I remember in the early days of elearning (back when we called it CBT, if I remember rightly) being distraught at the limitations (then, and things haven't moved that far forward) of Citrix sessions that wouldn't allow audio to work in the packages. Crazy to think you could have elearning without audio back then. Perhaps even crazier, can you imagine a phone without audio, or a microphone to speak into? Yet the large majority of young mobile phone users hardly make a call in the traditional sense - they use text or picture messaging to convey what they think. If we looked back a few years at the features of mobile technology, it might have been reasonable to think that text messaging was a stop-gap until something came along to replace it... yet here it is, not just as popular as ever but more popular than ever. Studies a few years ago revealed that the mobile phone's primary use among young people was text messaging (42% if you're into numbers, with safety next at 35% - Nielsen 2010). Whilst I'm referencing, I pinched some of my ideas from this blog post: We Never Talk Any More
Surely these two things aren't related though: audio in learning and mobile phones? Well, here's a theory or two on why they are and where we might be headed. One issue is that without headphones, audio just isn't private. Ever navigate to a website (especially when you're involved in self-directed learning) and the video and audio just automatically blare out and everyone in the office looks round? One of my pet hates on websites is embedded video and audio that thinks it's fine to start automatically and push its way in. In fact, if I Google how to do something and there are both web pages and a YouTube video, I tend to head to the web page first rather than straight to the video. The reason is, and here's my second point, that I have control over what I read, what I skim and how I progress. Most how-to videos start off by teaching granny how to suck eggs (or assume no knowledge, if you're unfamiliar with the expression); they reinforce the old 'you know nothing, I know everything' model of teaching that some of us are working really hard to dispel. The ownership of my learning is suddenly in someone else's hands again, and I don't want that.
The third point here is push versus pull. Not only do I have to watch the video and listen to the audio to make sense of it, I can't control what's coming next - I can't even predict what's coming next. When I read an article or a blog I skim; if I like what I'm seeing I go back and forth sometimes, or straight through other times, but it's my choice, my direction: me pulling the information rather than someone else pushing what they choose to push.
Then there's the other side of audio. The biggest issue with taking a phone call over a text message is the need to speak back. Sure, I can listen (although there are those who argue I can't), but the moment I need to speak, privacy goes out the window. How many of you work in open-plan offices? The reason people often don't like them is that very lack of privacy, particularly when speaking with others on the phone or via web-conferencing. In fact, let's take this further into the world of technology. I've got an Apple Watch and, actually, I love it. The thing I love most is the haptics (the little vibrations the watch gives to let me know what's going on). Then there are the biometric sensors and a very cool Retina screen, which handles most of the micro-type messaging I want to see. I also like that there are (editable) one-touch responses to things like Facebook messages or texts. Sure, Siri is evolving and the voice commands on the watch are useful (for example when in a car), but outside of the novelty factor I do feel a bit self-conscious if I have to speak to my watch in public. So if audio outputs can affect privacy, audio inputs are even more disruptive.
The other thing is the asynchronous nature of communications that aren't audio based. I can, for example, multi-task and converse with several people at the same time, continue to do other things, and it doesn't affect the quality of the message I send - the same can't be said for an audio conversation, much less video or face to face. It also allows me to give informed responses: checking online for solutions, sending through hyperlinked information and, of course, recording it all in a store so that I can scroll back through (or search) without ever having to consider writing something down. In the learning sense, our learners can take control beyond the direct learning content - interacting with others and using social learning practice without the pressure of having to watch the twenty-minute one-way stream of 'knowledge' (don't get me started).
Where does that leave us? Typing text messages has given modern mobile users very well-developed thumbs, but surely the future lies in ways to make what we think appear on the screen, or in command methods beyond the audible. For learning, do we focus on input modes that don't rely on voice? To be honest, most learning doesn't rely on voice input, and probably for good reason - can you imagine a room full of people trying to talk to an automated 'programme'? Voice recognition is still a little frustrating too; I've had some very unproductive 'conversations' with automated systems, and Siri and I have something of a love-hate relationship!
Now, just to clarify my slightly off-the-wall blog today: I do believe that video, streaming media, virtual reality, augmented reality and other audio-visual inputs and outputs have their place moving forwards, and YouTube is still THE place to go for funny animals as well as a valuable learning resource. But I think the future needs to consider the disruption of audible communications and how that meshes with increasingly crowded environments of people interacting with others not in the crowd.
So there you have it, audio doesn't always fall on deaf ears... and perhaps that's the problem. I look forward to the solution.