Friday, April 6, 2018

Less Distracting and More Effective Video Messaging


          Have you ever found yourself listening to music while browsing the internet (or writing a report) and felt the need to turn down the volume when trying to concentrate? I find myself doing this quite often and have noticed that, for me, certain musical genres are less distracting than others. I have also found that less dynamic (more compressed) studio mixes can be more distracting. Though you may not think you are giving the music much attention, your brain is still processing it; the lyrics and complex melodies are both vying for your attention. Somehow, music you have been listening to for an extended period becomes noise, a distraction, even though it isn’t physically blocking your eyes from reading or writing.
          This phenomenon can be attributed to cognitive load theory, an educational theory which holds that every cognitive task we approach has an intrinsic cognitive load. That main task can then be disrupted by extraneous cognitive load, or “high levels of element interactivity…that unnecessarily increase the number of interacting elements that learners must process” (Paas & Sweller, 2016, p. 38). Thinking back to my earlier example, it makes sense: as you devote more concentration to putting words on a page, or to reading and interpreting them, you suddenly find that the Eagles or Led Zeppelin are crowding your thought process, increasing your cognitive load. To combat this, you lower the volume, thus decreasing the extraneous cognitive load.
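          One rough way to picture the tradeoff (my own shorthand, not a formula from Paas and Sweller) is that the two loads have to fit within a fixed working-memory budget:

$$L_{\text{intrinsic}} + L_{\text{extraneous}} \le C_{\text{working memory}}$$

          Here the intrinsic load is the writing or reading itself and the extraneous load is the lyrics and melodies competing for the same capacity. Turning the music down doesn’t change the intrinsic term; it only shrinks the extraneous one until the sum fits under capacity again.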
          But then how is multimedia learning effective? Online learning and interactive classrooms make use of both audio and visual components, sometimes in conjunction with a live instructor. The short answer is that working memory has separate visual and auditory channels, each with limited capacity, and instruction works best when information is spread across both rather than piled into one. Our brains have a difficult time pushing too much information through a single “channel,” such as the visual channel, where we might be relating information on a chart to our notes or to the back of the page. That task stresses the visual channel, making it difficult to connect the interrelated content. In the example of listening to music while working, we have difficulty concentrating because the music and the content we are reading are not interrelated at all; the music simply adds extraneous load that competes for our limited attention.
          Instruction, then, must be carefully crafted with the modality principle in mind to ensure that the multiple sources of information in a mixed-mode presentation are complementary and contribute to a superior learning experience (Low & Sweller, 2016, p. 227). To give an example, “when pictures and words are both presented visually, the visual processor can become overloaded but the auditory processor is unused. When words are narrated, they can be dealt with in the auditory processor, thereby leaving the visual processor to deal with the pictures only. Thus, the use of narrated animation reassigns some of the essential processing from the overloaded visual processor to the underloaded auditory processor” (Low & Sweller, 2016, p. 238). This information isn’t new, but I felt it needed to be mentioned for those who may not have heard of these principles before.
          My real concern is how this applies to the content that I create every day — video. With the ubiquity of video in our world, we should expect our students to be comfortable learning from it and, therefore, focus on creating better instruction with it. For this argument, I’m going to focus on one of the most basic approaches to video: the talking head. You know, the medium shot of a person sitting in a chair, speaking directly to or slightly off camera? Though typically regarded as rudimentary or flat-out boring, filming a subject talking to the camera, done well, can be a cost-effective way to relay information and engage your viewer.
          In the article “The talking-head video 2.0: Findings from eye-tracking research,” Pernice discusses simple techniques for keeping the viewer’s attention. However, “make no mistake, nothing is going to save a video with a dull message” (Pernice, 2017). The study followed the eye movements of viewers watching a two-subject discussion. Aside from having an enthusiastic and animated host, Pernice emphasizes the importance of changing the visual: “When the video looks the same except for moving lips and blinking eyes, users get bored. But a greater change in facial expression, subject position, and even camera angle can reawaken the user’s attention” (Pernice, 2017).
          While the article discusses aspects such as using less-distracting background elements and taking advantage of residual fixations, I want to focus on two findings from the study that support the modality principle. The first actually has nothing to do with the video itself, but rather with the content on the page outside of the video frame. As users listen and inevitably become visually “bored,” their eyes begin to wander. Having on-page content that supports the topic of the video keeps users engaged; it can reactivate their visual channel with relevant information and increase learning and focus. The second finding is that including appropriate graphics also helps to reinvigorate the visual channel. “Visuals that appear in the video attract attention, and seeing and hearing the same message reinforces it to the users” (Pernice, 2017). Does that sound familiar?
          Even though the results of this study were intended for a broadcast production audience, I do find it fascinating when educational theory finds a crossover with multimedia production.  As time goes on, these types of connections are going to become increasingly important for instructors as they search for more effective ways of teaching new audiences of all ages on a variety of digital learning platforms.



Low, R., & Sweller, J. (2016). The modality principle in multimedia learning. In R. E. Mayer (Ed.), The Cambridge Handbook of Multimedia Learning (pp. 227-246). Cambridge: Cambridge University Press.

Paas, F., & Sweller, J. (2016). Implications of cognitive load theory for multimedia learning. In R. E. Mayer (Ed.), The Cambridge Handbook of Multimedia Learning (pp. 27-42). Cambridge: Cambridge University Press.

Pernice, K. (2017, August 20). The talking-head video 2.0: Findings from eye-tracking research. Retrieved from https://www.nngroup.com/articles/talking-head-video/

Monday, February 19, 2018

Visual Storytelling with Semiotics

        This afternoon I attended a workshop on visual storytelling at Princeton University's Center for Digital Humanities.  The workshop focused on four main Modalities of Design (listed below) and on how to use semiotics to assign meaning to a custom visual language and develop a new way of communicating.  One important takeaway for me, which I'm sure is familiar to anyone in design work, was that design influences the intent of the content.  Though it seems rather common-sense, hearing it during the workshop and seeing examples reinforced the importance of the concept.

        Sometimes I find that I focus too much on execution and lose track of theory and principles, even though my work often embodies both without my realizing it.  This workshop forced me to take a step back and think about crafting effective messaging in a new medium.  I chose to tell the story of planning my upcoming wedding in Las Vegas.  As a video editor, I tend to think linearly in my approach to storytelling, so my visualization reflected that.  I then tried a second approach and, after many attempts, was able to craft a more abstract visualization to represent my story.  I feel that my second visual wasn't as effective, but it was a great exercise, and the leader of the workshop felt that it better associated shared story elements.  I also learned a lot from seeing the other participants' examples and hearing the thought processes behind their unique approaches, and I walked away with some new insights, particularly that a visual story doesn't have to have a concrete beginning and end; it can present information that leads to individual story threads that, taken together, give the viewer a full view and understanding.  The most challenging aspect for me was telling a story with the fewest possible visual elements.  Though it wasn't a requirement, I feel that a viewer can easily become overwhelmed by an extensive character map to reference back and forth, which can exceed their visual cognitive load and cause them to lose focus, interest, and understanding.

Modalities of Design
(from the workshop)
  • Communication Design — delivering messages 
  • Interface/Interaction Design — user experience; behavior of products, people and systems 
  • Information (Visualization) Design — data visualization 
  • Critical Design — conceptual scenarios; hypothetical objects; social-political-cultural commentary, speculative
My Worksheet


Monday, July 31, 2017

Educational technology with the most potential

Within the past 10 years, communication devices have drastically changed the world and the way we all interact.  With the demise of the phone call, students will need to master all forms of communication for their future careers, most importantly written communication and video.  Suggesting video as a prominent form of communication is an interesting proposition, and not just because I do it for a living.  Video has become ubiquitous on social media. Scroll through Facebook or even Instagram; much of the shared content we see, from peers and corporate organizations alike, is video.  Video uses multiple ways of communicating messages.  Whether through direct information, storytelling, or visual portrayal, creating a video requires skills beyond just the written word.  Effectively communicating through video is similar to effectively communicating in real life, but it can also convey intangibles and ideas by bringing in creativity and audio/visual aids.  Understanding what goes into the production of a video will also help students to better organize their thoughts, effectively plan and outline, and consider how their message will connect with an audience, something that cannot be overstated.

            While my current position isn’t tied to education, I do have the ability to teach within a digital learning lab where students can learn to use digital tools for their college courses.  I’d say that the greatest barrier to integrating technology within higher education is acceptance by the professors.  As time passes, more and more professors are starting to see the benefits of new methods of communication and want to incorporate technology other than PowerPoint into their classrooms.  But for institutions and departments that are known for traditional methods of research and have proven strong in their fields, the incentive to change is minimal.  My personal approach to choosing any new technology follows a series of questions:  Is it proven?  What does a successful example look like? How difficult is it to learn?  Will it get in the way of the task at hand, or will it augment it?  For me, video projects and digital storytelling answer all of those questions positively.

This post was shared in a discussion forum in my graduate course "Integration and Management of Computers for Learning" at Purdue University.

Monday, May 22, 2017

Demonstrating my disposition for life-long learning and continuous professional development

I have been a multimedia professional for the past 10 years and was hired one year ago as a Multimedia Specialist in the Office of Communications at Princeton University. Apart from visiting trade websites and analyzing online tutorials on a regular basis, I’d like to demonstrate my disposition for life-long learning and continuous professional development by discussing my recent experience with the largest conference in the United States for video professionals, the NAB Show (National Association of Broadcasters). NAB hosts two shows each year: one in New York City and another, larger show in Las Vegas. This year I was able to attend the event in New York City. Although I could not make the Las Vegas show, I still followed the conference online and kept up with the keynote speakers and presentations.

At the New York City NAB Show, I was able to get hands-on with the latest equipment and speak with representatives about how to improve my workflow and output. I began relationships at the show and was able to follow up with both Canon and Panasonic representatives, who came to my office, evaluated our current inventory, and discussed options for improving the quality of my office’s multimedia presence online. From those meetings, I was introduced to local vendors who could help implement the new equipment into our workflow.

The artifact that I included is one of many presentations that I viewed online from the 2017 NAB Show in Las Vegas. This particular example demonstrated Clemson University’s athletic video production workflow, which not only produced web-ready content but also integrated expertly with Adobe products to turn content around almost instantly for social media, an approach they referred to as “content velocity.” It was inspiring to see how they organized a team of student videographers and a staff of content professionals to produce constant, and more importantly on-time and relevant, social media content during their football games. I learned about new tools that Adobe had recently released and how Clemson adopted them to streamline their production methods. It was also great to see their process for organizing and retrieving footage and how efficient it was in a fast-paced environment. It is a presentation that I intend to share with my colleagues and social media team to see if my office could incorporate some of the tools they use so that we can produce better social media content and work more collaboratively as a whole.

I hope to continue to engage in these types of experiences, not only to increase my own potential, but also to bring new ideas and creative approaches to familiar situations in whatever working environment I find myself in over the coming years.

Clemson Athletics: Social Media Video and Content Velocity
https://www.youtube.com/watch?v=g_i1V7ObJ58



This is a repost from a competency demonstration from my graduate work in Learning Design and Technology at Purdue University.



Friday, May 12, 2017

Adobe gives Premiere an overhaul and how I plan to teach it

Adobe updates Premiere

Adobe released its 2017 updates to many of its Creative Cloud applications this past April.  To my daily editor, Adobe Premiere, they added some much-appreciated updates, including a revision of the dated Title tool and updated audio mixing options.  While I haven't yet had a chance to work with the latest version (the update is downloading as I type this blog post), I really look forward to the updated titles, which should speed up my workflow by letting me add motion graphics and animated lower-thirds to my projects directly rather than flipping between Premiere and After Effects.  However, Blackmagic Design has been steadily expanding the capabilities of DaVinci Resolve, a well-established color grading tool, and recently acquired and incorporated professional audio software (Fairlight).  This makes Resolve significantly more appealing to someone like me, and its intuitive interface is certainly a welcome sight next to Adobe's (though to Adobe's credit, they have gotten better with the inclusion of their workspace panels).  I plan to work in DaVinci Resolve on a smaller project to see how the workflow compares to Adobe's.

https://blogs.adobe.com/creativecloud/the-latest-and-greatest-for-premiere-pro-cc-and-media-encoder/?segment=dva


http://www.newsshooter.com/2017/05/03/51093/



Teaching Adobe Premiere this coming Fall

I also came across this Adobe Live Stream Series - How to Make Great Videos.  I am currently developing my own Adobe Premiere workshop for the Digital Learning Lab at Princeton.  I always like to see how other instructors approach teaching software and what they emphasize compared to what I emphasize.  While my future students could simply watch a series such as this one, which is an excellent in-depth view of Premiere, I will need to take a more targeted approach.  Students who attend my workshop will most likely be doing so to fulfill a video requirement for a class.  Many of these students are taking four or five courses at Princeton and won't have the time to dedicate six hours to learning a software package for a single assignment.  However, as I mentioned in an earlier blog post, this is the type of new literacy that I believe students will need to be successful in their future careers.  Knowing this, my approach will be to provide them with the technical skills, abilities, and judgment to make the fundamental choices involved in crafting messages in a digital story.  Because Premiere now handles multiple video formats without transcoding, students won't need to understand more complex topics like video codecs, at least not initially.  My hope is to guide them toward creating effective stories and to instill an interest in further developing a 21st-century means of communication.

https://blogs.adobe.com/creativecloud/live-stream-series-how-to-make-great-videos/?segment=dva

Wednesday, March 29, 2017

Internet literacy in modern education

The Internet is definitely more of a literacy issue than a technology issue. “The Internet [should be seen] not as a technology but rather as a context in which to read, write, and communicate. The Internet is no more a technology than is a book” (Leu, O’Byrne, Zawilinski, McVerry, & Everett-Cacopardo, 2009, p. 265). The Internet is an access point to a 21st-century way of communicating, reading, and consuming information. To view the Internet as a technology, or a tool, is to vastly underestimate its modern capabilities, ignore the functioning and connectivity of the modern world, and deprive students of valuable resources that will help them assimilate into their careers and into modern society.

Leu et al. touched on economic differences and test scores briefly when they wrote, “Children in the poorest school districts in the United States have the least amount of Internet access at home [and] the greatest pressure to raise scores on tests…and schools do not always prepare them for the new literacies of online reading comprehension at school” (p. 267). I think that, moving forward, personal biases, such as those of the older generation of technology-averse teachers and instructors, will begin to thin out as 21st-century students like me make their way into the field. I hear about it now in public schools and even some colleges, where teachers don’t want to change the way they teach because they are tenured, have been doing it the same way for 15+ years, and just want to continue using a method that worked in the past. While it may have worked in the past, the world is ever changing, and education should be as fluid as the real world.

The second factor that I find limiting the adoption of technology tools in the classroom is budgeting. Districts don’t allocate enough funding for technology integration, which is a difficult balance among tax dollars, population, and number of schools. I think it will be difficult for the public school system to enable teachers to use technology in the classroom until there is enough proven success from local private schools that perform as well as or better on standardized testing. While standardized testing is another discussion, I believe it is what is holding many schools back from raising their technology standards, because they don’t see the connection between a partnered learning experience like the one Prensky (2010) suggests and statewide measures of success. From an instructional designer’s and future instructor’s standpoint, I think we can begin suggesting alternative ways to approach lessons that explore and showcase the benefits of technology and Internet literacy, so that others will be more willing to recognize its place in the modern world.

This was a discussion post copied from my graduate school work in Learning Design & Technology at Purdue University.


References

Leu, D. J., O’Byrne, W. I., Zawilinski, L., McVerry, G., & Everett-Cacopardo, H. (2009). Expanding the new literacies conversation. Educational Researcher, 38(4), 264-269.


Prensky, M. (2010). Teaching digital natives: Partnering for real learning. Thousand Oaks, CA: Corwin, a SAGE Company.



Wednesday, March 22, 2017

The grey area of intelligence: Blurring the line between artificial and genuine intellect


You’ve seen the films: 2001: A Space Odyssey, Blade Runner, Terminator, A.I., WALL-E, and countless others. At a time when computer technology was in its infancy, it was exciting to think about the possibilities of robot intelligence and robots’ relationships with humans. In the 21st century, however, amidst a vast expanse of computer engineering and learning technologies, robots integrated into daily life are rather common. They build our cars, build our computers, search for the closest burger joint, and even assess our online shopping habits. As engineers continue to push the limits of what computers and robots are capable of, could we, some day in the near future, find ourselves educating robots? Would educators be involved in programming AI to incorporate self-learning? What level of educational theory goes into designing these types of robots? Starting with a blank slate gives developers the flexibility to hyper-focus the “mind” of the robot on specific tasks, without the learning differences between students or the distractions humans encounter in everyday life. Being able to do this sidesteps many of the difficulties educators encounter when trying to teach others. Companies such as OpenAI in Silicon Valley are constantly working on robot learning and communication.
“With early humans, language came from necessity. They learned to communicate because it helped them do other stuff, gave them an advantage over animals. These OpenAI researchers want to create the same dynamic for bots. In their virtual world, the bots not only learn their own language, they also use simple gestures and actions to communicate—pointing in a particular direction, for instance, or actually guiding each other from place to place—much like babies do. That too is language, or at least a path to language” (Wired, 2017).
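Reading about that experiment, I wanted to see the idea in miniature. Below is a tiny Python sketch of the classic Lewis signaling game, which is about the simplest setting in which two bots invent a shared “language” simply because it earns them a reward. To be clear, this is only my own toy illustration of the dynamic the article describes, not OpenAI’s actual approach (their agents use far more sophisticated reinforcement learning inside a rich virtual world), and every name in it (speaker, listener, play_round) is mine.

import random
from collections import defaultdict

# Toy Lewis signaling game (an illustration, not OpenAI's method): a "speaker"
# bot sees a state of the world and sends a signal; a "listener" bot sees only
# the signal and guesses the state. Both are rewarded when the guess is
# correct, so a shared state-to-signal convention -- a tiny invented
# "language" -- emerges purely from the task reward.

N = 3                                      # number of states, signals, guesses
speaker = defaultdict(lambda: [1.0] * N)   # state  -> signal propensities
listener = defaultdict(lambda: [1.0] * N)  # signal -> guess propensities

def choose(weights):
    """Pick an index with probability proportional to its accumulated reward."""
    return random.choices(range(N), weights=weights)[0]

def play_round(learn=True):
    state = random.randrange(N)
    signal = choose(speaker[state])
    guess = choose(listener[signal])
    reward = 1.0 if guess == state else 0.0
    if learn:  # reinforce whatever each bot just did, scaled by the shared reward
        speaker[state][signal] += reward
        listener[signal][guess] += reward
    return reward

if __name__ == "__main__":
    random.seed(0)
    for _ in range(20000):                 # let the two bots practice together
        play_round()
    wins = sum(play_round(learn=False) for _ in range(1000))
    print(f"success rate after practice: {wins / 1000:.2f}")
    for state in range(N):
        best = max(range(N), key=lambda s: speaker[state][s])
        print(f"state {state} -> the bots settled on signal {best}")

Run long enough, the two propensity tables usually settle into a consistent state-to-signal mapping, which is the “convention” part of a language; what the OpenAI researchers describe is, loosely speaking, this dynamic scaled up enormously.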
However, when will robots stop learning? Will they be programmed to learn only to improve on specific tasks, or will their curiosity be boundless? If they are connected to the internet, they could potentially never stop learning. Of course, each robot will be limited by its design: the amount of memory it can store, its functionality and articulation, and its battery power will presumably all limit what a robot can actually accomplish. But it no longer seems preposterous to ponder a day when robots educate us.
“In the end, success will likely come from a combination of [learning and programming] techniques, not just one. And [researcher Igor] Mordatch is proposing yet another technique—one where bots don’t just learn to chat. They learn to chat in a language of their own making. As humans have shown, that is a powerful idea” (Wired, 2017).
I just hope Will Smith is still around to save us from the takeover.

References

https://www.wired.com/2017/03/openai-builds-bots-learn-speak-language/