Table Plan Image (LucidChart)
Explain/Annotate the usage of AudioTool and ToneMatrix. [IMG from AudioTool]
(What is a ToneMatrix?)
(Why did I use it for my Expanding Sphere sounds?)
http://i.imgur.com/hRzrKUE.png
[Tone Audio File]
Explain/Annotate the editing of my tones [IMG from Audacity]
(Explain the process of splitting the tones into separate MP3 files)
(Why did I use Audacity?)
http://i.imgur.com/9IBzXJL.png
[2 Tone examples]
Can I apply any certain effects in Audacity?
Collecting Button Sounds and Open/Close pages with Zoom audio
Source: Mouth noises (Clicks)
Picture of Audio Zoom.
[2 Click examples]
Assigning sounds in Unity.
Unit 62 and 58 3D Sound Toy
Wednesday, 6 March 2013
Tuesday, 19 February 2013
Unity Research
Customisation
(19/02/2013) At this moment I am wavering on whether to use the built-in GUIStyle that Unity offers, with its CSS-like transformation tools, or to import my own images that I've created with Photoshop. This link details the various properties that GUIStyle offers; at first glance it does seem attractive, however at this stage I am worried about button animation (e.g. drop-shadow). If I am unable to re-create the effects I want in order to show a button being selected and clicked on, I may have to resort to importing images for each instance. Additionally, my design requires part of the button to have a large circle with an image inside it next to the associated text, and with the default GUIStyle that Unity offers I don't think that this effect is achievable.
Nevertheless, I've managed to learn how to create GUI buttons in Unity with a sophisticated Layout method and how to easily customise more than one control. Here is an image of my current GUI code for the Main Camera.
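Since the code itself is only shown as a screenshot, here is a rough C# sketch of the kind of GUILayout-based menu I'm describing; the button labels, style field and scene names below are placeholders of mine, not the actual code.

```csharp
using UnityEngine;

// Hypothetical sketch of a GUILayout-based menu; names and values are assumptions.
public class MainMenuGUI : MonoBehaviour
{
    public GUIStyle buttonStyle; // one custom style shared by every button

    void OnGUI()
    {
        // GUILayout positions the controls automatically inside this area.
        GUILayout.BeginArea(new Rect(Screen.width * 0.1f, Screen.height * 0.3f, 200, 300));

        if (GUILayout.Button("Play", buttonStyle))
            Application.LoadLevel("Game");

        if (GUILayout.Button("Information", buttonStyle))
            Application.LoadLevel("Info");

        if (GUILayout.Button("Quit", buttonStyle))
            Application.Quit();

        GUILayout.EndArea();
    }
}
```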
Click this link to view an image of the Menu with the default button style.
Tomorrow (first day back from Half Term), I'm going to design my Menu in Adobe Photoshop and attempt to import the default buttons into Unity at College.
Screen Resolution
(20/02/2013) It seems that finding code that can automatically change the size of the controls and the GUI layout itself depending on the aspect ratio is quite hard. After trawling the Unity forums and Google for the good part of a few hours, I've finally decided to design my Interactive Sound Toy for a specific resolution. I've chosen this because I don't believe Unity has built-in support for widely varying screen sizes. I have come across some solutions such as "Orientation Control", however I am personally not willing to pay money for a student project that is to be completed in 3 weeks. Another solution I found was using a GUI matrix to rescale the GUI for the screen size. This method does work to some extent, however I'm not entirely sure how it would behave if I had a large number of different controls on screen (e.g. when designing the game page).
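For reference, here is a minimal sketch of the GUI-matrix rescaling approach mentioned above; the 1024x768 native design resolution is an assumed value of mine, not one from my project.

```csharp
using UnityEngine;

// Minimal sketch of the GUI.matrix rescaling idea; the native resolution is an assumption.
public class ScaledGUI : MonoBehaviour
{
    private const float nativeWidth = 1024f;
    private const float nativeHeight = 768f;

    void OnGUI()
    {
        // Scale everything drawn afterwards so a layout designed for the
        // native resolution stretches to fill the actual screen.
        float scaleX = Screen.width / nativeWidth;
        float scaleY = Screen.height / nativeHeight;
        GUI.matrix = Matrix4x4.TRS(Vector3.zero, Quaternion.identity, new Vector3(scaleX, scaleY, 1f));

        // Controls below are laid out in native-resolution coordinates.
        GUI.Box(new Rect(412, 334, 200, 100), "Scaled with GUI.matrix");
    }
}
```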
I think at this stage I'm going to focus on designing for a specific ratio, as going too far into handling different resolutions is going to stray me from my goals and aims for this unit.
(20/02/2013) I think I've finally decided that I'll use percentages to calculate where to position the controls for my GUI. I know that the final product will lack scaling functionality, but by using percentages instead of fixed integers the position of the elements will nonetheless be somewhat relative to the screen.
The video below is a quick scaling test; obviously the controls themselves don't scale, but I've managed to keep the positioning of these elements relative to the screen size. The code below the video is what I used to do this.
I have considered using Screen.width and Screen.height for the size of the controls as well, however the font size set in the style just gets cropped out. Perhaps a different method of styling would let me actually scale the controls too.
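To illustrate the percentage-based positioning idea, here is a small sketch; the exact percentages and button names are placeholders rather than the code from the video.

```csharp
using UnityEngine;

// Sketch of percentage-based positioning; values and labels are placeholders.
public class RelativeMenu : MonoBehaviour
{
    void OnGUI()
    {
        // Positions are fractions of the current screen size, so the buttons
        // stay in the same relative place when the window is resized, even
        // though the controls themselves keep a fixed pixel size.
        float x = Screen.width * 0.05f;   // 5% in from the left
        float y = Screen.height * 0.40f;  // 40% down the screen

        if (GUI.Button(new Rect(x, y, 150, 40), "Play"))
            Debug.Log("Play pressed");

        if (GUI.Button(new Rect(x, y + 60, 150, 40), "Information"))
            Debug.Log("Information pressed");
    }
}
```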
Camera Control
I've been researching various camera rotation scripts for the past hour or two, and I've managed to find an improved version of the default one supplied in the Standard Assets. This one not only rotates fully around a GameObject but can also zoom in and out within set limits. However, if I am to match what I've planned, I must make the camera rotate around the object only while the middle mouse button is held down. Because I want to use the code that I've found, I'm going to ask about this on the Unity Forums.
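For reference, this is a simplified sketch of the behaviour I'm after: an orbit camera that only rotates while the middle mouse button is held. It is my own rough version, not the Standard Assets script or the improved script I found.

```csharp
using UnityEngine;

// Rough sketch of an orbit camera gated on the middle mouse button; my own
// simplified version, not the script referenced in the post.
public class MiddleMouseOrbit : MonoBehaviour
{
    public Transform target;          // the cube the camera orbits
    public float distance = 10f;
    public float rotateSpeed = 120f;
    public float minDistance = 3f, maxDistance = 20f;

    private float yaw, pitch;

    void LateUpdate()
    {
        if (target == null) return;

        // Only rotate while the middle mouse button (button 2) is held down.
        if (Input.GetMouseButton(2))
        {
            yaw += Input.GetAxis("Mouse X") * rotateSpeed * Time.deltaTime;
            pitch -= Input.GetAxis("Mouse Y") * rotateSpeed * Time.deltaTime;
            pitch = Mathf.Clamp(pitch, -80f, 80f);
        }

        // Zoom with the scroll wheel, limited between min and max distance.
        distance = Mathf.Clamp(distance - Input.GetAxis("Mouse ScrollWheel") * 5f, minDistance, maxDistance);

        Quaternion rotation = Quaternion.Euler(pitch, yaw, 0f);
        transform.position = target.position + rotation * new Vector3(0f, 0f, -distance);
        transform.rotation = rotation;
    }
}
```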
A design feature that I may change at this stage, however, is the position of the cube when the level is loaded. It might be better if the GUI buttons were moved further to the side and the object itself centred in the middle; this would help with visibility when zooming in and out.
One last script I've added today was something I was researching yesterday: as I was unable to find a texture I liked for my background, I've decided to use a solid colour instead. However, the script I've found here allows me to add a simple gradient to my background.
Spacing out Selectable Dots and Bugs
(27/02/2013) Luckily, instead of using MoGraph in Cinema 4D to clone a sphere and import it into Unity, I've managed to space out multiple sphere GameObjects directly. By spacing each object roughly 0.33 units apart I've been able to fit a 5 by 5 by 5 cube of small spheres. I plan for each one of these spheres to trigger an expanding sphere, which I'll hopefully work on tomorrow. However, placing these spheres inside the cube has introduced a bug where the camera automatically zooms in when I rotate around a certain area; hopefully by consulting my tutor and the Unity forums tomorrow I can fix this error.
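For illustration, here is a sketch of how a 5x5x5 grid like this could be generated from a script instead of placing each sphere by hand; the 0.33 spacing matches what I used, but the prefab field and the centring are assumptions of mine.

```csharp
using UnityEngine;

// Sketch of generating the 5x5x5 grid of selectable dots; only the spacing
// comes from the post, everything else is an assumption.
public class DotGrid : MonoBehaviour
{
    public GameObject dotPrefab;   // the small selectable sphere
    public float spacing = 0.33f;
    public int size = 5;

    void Start()
    {
        // Offset so the whole grid is centred on this object's position.
        float offset = (size - 1) * spacing * 0.5f;

        for (int x = 0; x < size; x++)
            for (int y = 0; y < size; y++)
                for (int z = 0; z < size; z++)
                {
                    Vector3 localPos = new Vector3(x * spacing - offset,
                                                   y * spacing - offset,
                                                   z * spacing - offset);
                    Instantiate(dotPrefab, transform.position + localPos, Quaternion.identity);
                }
    }
}
```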
RayCasting
(03/03/2013) A few days ago I managed to place all of the selectable dots around my cube in the way I wanted. However, the colliders on the Holding Box stopped the user from clicking on any of the dots to spawn a sphere, so I needed to use raycasting. At first I managed to easily write out a raycast and draw it so I could see where it was going, although initially it was still hitting the box before the dots. I therefore put the Holding Box on a different layer and told the raycast to ignore all GameObjects on that specific layer. The code I used is in the image below.
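Since the actual code is only visible in the screenshot, here is a rough reconstruction of the idea: cast a ray from the mouse that ignores the Holding Box's layer, and act only on the dot that was hit. The layer number and the "Expand" method name are placeholders of mine.

```csharp
using UnityEngine;

// Hedged reconstruction of the raycast approach; layer index and method name are assumptions.
public class DotPicker : MonoBehaviour
{
    public int holdingBoxLayer = 8;

    void Update()
    {
        if (!Input.GetMouseButtonDown(0)) return;

        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);

        // The mask includes every layer except the Holding Box's layer,
        // so the ray passes straight through the box and reaches the dots.
        int mask = ~(1 << holdingBoxLayer);

        RaycastHit hit;
        if (Physics.Raycast(ray, out hit, 100f, mask))
        {
            // Draw the ray so it can be seen in the Scene view while testing.
            Debug.DrawLine(ray.origin, hit.point, Color.red, 1f);

            // Only the dot that was actually hit responds, rather than every dot.
            hit.collider.SendMessage("Expand", SendMessageOptions.DontRequireReceiver);
        }
    }
}
```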
To date I've managed to use a raycast to get through the box and reach all 125 dots successfully, bearing in mind that other issues have occurred along the way. For example, I was originally using a prefab for each of the selectable dots, but whenever I clicked on one it would execute the code for every single dot in the cube. To counter this, I deleted the prefab and had to figure out how to execute the code only on the dot that has been hit by the raycast, which brought me to Unity Answers. Hopefully come Monday or Tuesday I'll be able to get to college, try out the code the friendly community users have suggested and get my spawning working properly.
http://answers.unity3d.com/questions/409005/if-raycast-hits-me.html
Finished the Info Page!
Today I've managed to finally complete the Information page. Overall, it consists of one button to escape back to the Menu page and various GUILayout text areas and boxes for information and images. Here is the finished product below (all done with GUILayout, meaning no GameObjects need to be rendered!).
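As a rough idea of how a page like this can be put together purely with GUILayout, here is a sketch; the text, image field and scene name below are placeholders, not my actual Info page code.

```csharp
using UnityEngine;

// Sketch of an information page built purely with GUILayout; content is placeholder.
public class InfoPageGUI : MonoBehaviour
{
    public Texture2D screenshot;   // an image shown alongside the text

    void OnGUI()
    {
        GUILayout.BeginArea(new Rect(Screen.width * 0.1f, Screen.height * 0.1f,
                                     Screen.width * 0.8f, Screen.height * 0.8f));

        GUILayout.Box("BallBox - Interactive 3D Sound Toy");
        GUILayout.Label("Click a dot inside the cube to spawn an expanding sphere. " +
                        "Hold the middle mouse button to rotate the camera.");

        if (screenshot != null)
            GUILayout.Box(screenshot);

        // Single escape button back to the menu scene.
        if (GUILayout.Button("Back to Menu"))
            Application.LoadLevel("Menu");

        GUILayout.EndArea();
    }
}
```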
Unity Tutorials
Unity Gui Tutorial Completed ✔
Unity Scripting Tutorial Completed ✔
Unity Title Screen Tutorial Completed ✔
Sunday, 17 February 2013
Planning Interactive 3D Sound Toy
Influences
Pulsate is an Interactive 2D Sound Toy consisting of ever expanding or contracting orange circles, which create a sound when they interact with one another. Basically, Pulsate gives the user the ability to create random synthesised tunes through the medium of an Interactive Toy. With its simple in-game interface and its purposeful sound manipulation tools, Pulsate is a great toy for people who are quite new to creating simple music.
(18/02/2013) Even after the extensive planning for my Interactive Toy, I do feel that it lacks in the sound manipulation department. If I have extra time to work on my Interactive Toy, or if I transfer it over to my Final Major Project, I will aim to make "BallBox" a 3D experience that builds upon what Pulsate already offers. Additionally, I am hoping to adopt its simple use of interface elements, which can explain what functions such as "Sliders" manipulate with little or no text information required.
Click the link below to play the web version of Pulsate.
http://lab.andre-michelle.com/swf/fl10/pulsate.swf
Sketches
Paper Prototype
Flowchart
Wireframe

Thursday, 20 December 2012
Sound Research
Components of Sound
When an object vibrates it creates a disturbance that travels through whichever medium it is adjacent to; in the case of sound waves this is air. Basically, sounds are pressure waves of air; if there wasn't any air there wouldn't be a medium for sound to travel across. To describe how we hear sound, we can take the example of a person clapping. When a person claps, the air between their hands is pushed aside as they come closer together. The quicker their hands come together, the higher the force and the faster the air molecules are pushed away from their hands. These air molecules then travel in all directions away from the two hands clapping; when this pressure wave comes into contact with our ear drum we hear the sound.
In order for us as designers to use sound in our digital productions, we need to understand the basic components which we can manipulate: Intensity, Pitch, Tone and Duration.
Sound waves have a number of properties; in terms of intensity, this is the amount of energy the sound has over a certain area (energy being measured as amplitude). From this we can deduce that the more energy a sound wave has, the higher its amplitude is, and therefore the more intense it is. In order to use sound intensity in digital media productions, we can either increase or decrease its level in decibels; in simple terms we are effectively changing the loudness of the sound.
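As a concrete relation (this is the standard acoustics formula rather than anything from my own notes), the difference in level between two sounds of amplitude A1 and A0 is L = 20 x log10(A1 / A0) decibels, so doubling the amplitude of a sound raises its level by roughly 6 dB.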
Pitch is what is used to distinguish sounds of the same note but at different octaves. In scientific terms, pitch is dependent upon the frequency of a sound wave: the more wavelengths there are within a certain unit of time, the higher the frequency. Pitch is measured in hertz, and in order for us to hear sounds the pitch must be between 20 and 20,000 hertz.
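As a quick worked example using standard values rather than anything from my research: frequency is the number of wave cycles per second (f = 1/T, where T is the period of one cycle), and a note one octave higher has double the frequency, so concert A at 440 Hz becomes 880 Hz an octave up; both sit comfortably inside the 20 to 20,000 hertz hearing range.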
Tone is really all about the vibration and the quality of the sound itself. To explain, tone describes the difference between two sounds played at the same pitch and volume; for example, the tone created by one beginner violinist is very different to that of a whole symphony of violinists. In more detail, when a sound source vibrates it has multiple frequencies, each of which produces a wave, and the sound quality depends on the combination of these different frequencies of sound waves.
The last factor to consider when manipulating sound is its speed. For example, by changing the playback speed of a sound we can make an explosion sound like a gunshot, or a voice sound like a cartoon chipmunk character. Playback can also be manipulated to make sounds more sudden or more subtle; for example, slowing down audio can introduce us to elements that we may not have noticed. Take the example of the water in this video: http://youtu.be/j_OyHUqIIOU?t=2m34s
One of the most highly noted debates in sound design is the usage of analogue against digital and vice versa. In reality there is a huge difference between analogue and digital sound; for most of the 20th century the usage of analogue was mainstream, but in the 21st century digital sound has been much preferred by both producers and consumers.
A sound wave that is recorded or used in its original form is known as analogue. For example, the signal that is taken from a microphone inside a tape recorder is physically laid onto tape. Other examples of analogue sound recording take the form of grooves (vinyl) or magnetic impulses. The alternative to recording sound this way is digital: digital sound is produced by converting the physical properties of the original sound into a sequence of numbers, which can then be stored and read back for reproduction. The process of recording digitally is taking an analogue wave, sampling it and then converting it into numbers so that it can be stored easily on a digital device (e.g. CD, iPod, computer, etc.).
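To illustrate the sampling step in code, here is a purely illustrative C# sketch using the standard CD values of 44,100 samples per second and 16-bit numbers; it is not something from my project.

```csharp
using System;

// Illustration of sampling: one second of a 440 Hz sine wave stored as 16-bit numbers.
class SamplingDemo
{
    static void Main()
    {
        const int sampleRate = 44100;   // samples taken per second (CD rate)
        const double frequency = 440.0; // concert A
        short[] samples = new short[sampleRate];

        for (int n = 0; n < samples.Length; n++)
        {
            // Measure the wave's amplitude at this instant and store it as a number.
            double t = (double)n / sampleRate;
            samples[n] = (short)(Math.Sin(2.0 * Math.PI * frequency * t) * short.MaxValue);
        }

        Console.WriteLine("First five samples: " + string.Join(", ",
            samples[0], samples[1], samples[2], samples[3], samples[4]));
    }
}
```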
The main advantage of using analogue sound is that it's considered a more accurate representation of sound: because it's been physically recorded, it's in its natural, original form. By using analogue you're basically getting what you're hearing with no deficiencies or other manipulations; it has been physically replicated. The deficiency in quality we can get with analogue sound is if it's been over-played to the extent where we permanently lose quality. Due to the limitations in editing analogue sound, we cannot easily retrieve sound if the device it's been recorded on is damaged. Additionally, the physical element of analogue sound means we need to rewind it in order to listen to a specific part of a recording, instead of just skipping to a certain point. Lastly, because analogue recording captures everything in its input vicinity, we can also get unwanted audio such as noise, or clipping at certain frequencies.
Interactive Media Products
There are many different directions in which we can take audio in Interactive Media projects; from sound-based games to button-click events, sound can both inform and convey emotion.
A good example of sound being used to emphasise atmosphere is within computer games. Even though computer games are mainly graphical, sound and music can be used to add atmosphere to the game whilst also strengthening the impact of the graphics themselves.
Within the past 10 years or so, however, there has been an increase in the prioritising of sound over graphics in interactive products such as video games. This prioritising of sound over graphics leads the user to use their imagination and associations, giving what they see on the screen a lot more texture and depth than we might first think. For example, a 2D square being dropped from a height can have a lot more weight and resistance if a smashing or thumping sound is applied to it as well. With sound as a base to a video game, the entertainment value can skyrocket, as the developers are letting the user build a unique picture of what they perceive. Take the example of Bit.Trip Runner: in this game the user is rewarded not just with points at the end of the level but with the music that they build up during it. By collecting all the red crosses and gold pieces, the user is adding certain tones and flavours to what they're initially hearing. An example of Bit.Trip Runner can be seen in the video below.
Even though sound is an amazing tool to use in video games, it doesn't come without its drawbacks. In order to add sound to an interactive project, the designer needs to deal with speed problems such as low-bandwidth internet connections or hardware limitations in processing speed, memory and specific storage media. It is these types of restrictions that have made designers hesitant to use sound and music to a great extent, because of the possibility of it not working for a majority of users. This is why in today's media sound is mainly used as a support to the graphics rather than a fundamental component.
As well as music and tones, speech is also a fundamental tool for storytelling, with voice-overs and narration. Because with narration there is no physical emotion, the characters are very voice driven and most of the time distinctive in the way they speak and the vocabulary they use. Taking the example of a recent popular indie game called "Bastion", the player hears a voice which guides them through the story and explains details which are more entertaining and efficient when explained through audio. Watch the video below from 1 minute onwards; there we have a very distinctive-sounding narrator who explains what the character is doing, where he is going and what is going on around him. This is an example of a disembodied voice: the tonal characteristics and emotions are far more highlighted than if the character himself had a physical representation, and it leaves the player to his or her imagination.
Outside the world of video games, sound is also very useful for applications on mobile devices or consoles. For example, an interface can communicate with the user not just through the animation it plays when a button is clicked, but also through the sound it plays when selected. It's this usage of key tones that can create a positive progression and connection with the user, as the sound signifies that the person is going forward or beyond where they were previously browsing. In more detail, it's quite a common practice for developers to use high tones for when a user is going into a menu or area and low tones for when they're exiting one. It's this type of technique that personifies the action and creates an emotion that can provoke a feeling of 'discovery' or 'excitement', therefore creating a richer experience.
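As a small sketch of how that convention might be wired up in Unity (the clip fields and method names here are my own, not from any particular product):

```csharp
using UnityEngine;

// Sketch of "high tone in, low tone out" menu sounds; names are placeholders.
[RequireComponent(typeof(AudioSource))]
public class MenuSounds : MonoBehaviour
{
    public AudioClip openTone;   // higher-pitched tone for entering a page
    public AudioClip closeTone;  // lower-pitched tone for leaving it

    private AudioSource source;

    void Awake()
    {
        source = GetComponent<AudioSource>();
    }

    // Call when the user navigates into a menu or page.
    public void PlayOpen()
    {
        source.PlayOneShot(openTone);
    }

    // Call when the user backs out of a menu or page.
    public void PlayClose()
    {
        source.PlayOneShot(closeTone);
    }
}
```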
In summary, we can structure the various sounds used in interactive media products with diegesis: diegetic sounds are part of an action or what we might see (the visual effects in Bit.Trip Runner), while non-diegetic sounds are sounds that aren't present within the scene (the voice-over in Bastion). A mixture of both can be referred to as a soundscape, which is a compilation of sounds (layered upon one another) that tries to capture emotion or emergence, for example the introductory sound when somebody boots up a Windows computer or a mobile device.
Functions of Sound
Beyond interactive media, sound can be used for a whole range of purposes. Mainly, sound is used to improve an experience; for example, when watching horror movies, gloomy music and tones can help build up the atmosphere and sustain it. In contrast to a gloomy score, cheery and upbeat music can invoke positive feelings such as happiness or joy. This usage of sound is commonly known as background music: music that serves the purpose of creating an atmosphere or portraying the feelings that a character should be feeling. Additionally, as well as invoking emotion, background music can be used to express a passage of time; this is known as a montage. This Vimeo link shows a scene from "Breaking Bad": basically, two methamphetamine cooks are producing a lot of product and making a lot of money over time. Specifically, around 3:15 we're shown a few shots in which our protagonist (or anti-hero), Walter White, shows signs of boredom or frustration.
Another purpose of sound is effects. Previously, in Sound for Interactive Media Products, I spoke of transitions or events for mobile devices or applications. Generally, sound effects can be used for more than just interactive products; they can be used for events in video games (a gunshot), explosions in movies and other types of media. Without sound effects the developers would be drawing the player away from the sense of realism that they're trying to invoke. Additionally, sound effects can also be used to notify a person that an event is occurring; for example, websites such as Facebook catch the attention of the user when somebody messages them. When you're messaged on Facebook a sharp, loud but quick noise is played so that the person on the computer can definitely register that they have been contacted. These types of noises are known as prompt noises; they're mainly used in interactive media to communicate with the user and register events.
Sound can also be used to emphasise a person's sensory experience of things such as lighting, smell and comfort; this is known as ambience. Ambience is effectively the noise of the environment; generally it's used to set a realistic scene by grounding the user in a certain location and situation. For example, ambience tends to consist of natural-sounding effects such as the cacophony of crickets and the leaves at night, or the crashing of waves during a stormy English afternoon on the beach. Like background music, ambience is a more subtle way of setting and sustaining the scene.
Within interactive media there is also voice recognition: somebody with the right application can use their voice to communicate with and control a certain device. For example, iPhones have a program called Siri which recognises certain voice patterns and tones and is able to communicate back and activate certain parts of your mobile device accordingly. This type of sound is generally used to inform the user of events going on around them rather than setting an emotional scene.
Annotate sounds in programs such as Logic or Audacity.
Collect a variety of different sound assets for a Library.
Apply effects which prove Creativity and Flair (Layered sounds, effects)
Final marks on the Sound will be applying it to objects within Unity.
Core Principles of Interaction/Interface Design
(Consistency, Visibility, Learnability, Predictability and Feedback)
Often abbreviated as IxD, Interaction Design is the practice of creating a digital system's structure and behaviour. The key goal when creating these interactive systems is for the product to develop a meaningful relationship with its user, be it on a personal computer or a mobile device. Quite often the design of these systems centres on "embedding information technology into the ambient social complexities of the physical world" (Malcolm McCullough). In comparison to a scientific/engineering route, Interaction Design focuses more on satisfying the needs and desires of the user than the desires of the technical stakeholders for a project.
In order for most interactive designs to function there must be a user interface design; this is the medium that the consumer and the product use to communicate with one another. Whether it is a mobile device, an appliance or otherwise, all interactive designs need an input/output design that is simple and efficient; this is the main focus of interface design. In further detail, UID aims to enhance the experience of the consumer through usability, efficiency, ease of memorisation, reliability, user-friendliness and usefulness.
There are many principles behind the design of interactive systems, one very important one being consistency. People are extremely sensitive to change, so it's ideal when designing an interactive product to keep consistency in mind. What interaction designers don't want are features that jerk the experience of the user and make them ask questions such as "Why is this different?" or "Why isn't that the same?" In order for these situations not to occur, we need the user to gain a sense of familiarity that makes them feel comfortable when using an interactive product. This sense can be developed through the user continuously interacting with the product and learning how it works.
For example, YouTube recently changed their layout, with the most noticeable differences being the positioning of the site itself within the browser and how the homepage defaults to "What to Watch" instead of only uploads from the user's subscriptions. When this update went worldwide, most people didn't like the change because it required them to re-learn what they were already comfortable with. Updates like these are a big deal to consumers, as they now have to do more "work" in order to access the content they desire, such as clicking through an extra page or focusing on the differences instead of the actual content.
Additionally, if people want to learn how a certain product works, then its components must all relate to one another. For example, if the elements on a website all behave alike, then they should also look very similar. Keeping consistency within interactive interfaces is important, as the user can gain a sense of expectation about how the product will behave in the future. Simple changes in format (font, size, iconography, positioning, etc.) can completely change the experience of the user and deter them from using a product that doesn't stay the same in appearance. Effectively, being inconsistent can change the learning curve dramatically.
The factor of visibility is extremely important to interaction design, as most interfaces are inherently visual; in order for the user to engage with the interactivity available, visual cues must be present in the product. If these cues were not present within the interactive system then we decrease the validity of the very fact that the product itself is "interactive", as the user is ignorant of the opportunity. Visual cues must be easy to find and recognise; discovering that a certain component is interactive shouldn't have to depend on luck or chance. The only occasion where we'd want the user to search for something and experience trial and error is with games or Easter eggs; these are an exception to the rules of visibility, as failing in this environment is part of the experience. The most common way a user realises that an element is interactive is by rolling over the subject itself, therefore overlays, transitions and other such effects are vital to noticing that interactivity exists. In some cases visual cues need to be emphasised so that the user will move their cursor towards the subject.
Take the example of a video on a webpage: without the play button and controls, anybody could see the media as just an image. It's these types of cues that communicate with the user without actually having to display any text about how to interact with it or what it actually is.
When people use interactive products (especially touch screens) they are usually very "click-happy"; using standard interface components such as hyperlinks, thumbnails, buttons, etc. invites the user to click on certain elements. If we were to style an element as if it were part of a block of text, then the user may not be able to recognise important interactive features such as hyperlinks or spoilers. This is why, when designing interactive systems, text styles need to be categorised visually depending on what function they serve.
False belief is a key problem when designing interactive interfaces: we don't want the user to believe they've seen it all when in fact they're missing more content that's available to them. For example, neatly displaying information on a webpage can actually make the user believe that there is no information to scroll down to; hinting at a false bottom is actually a cue for the user to focus on that area and look for more info. False bottoms can either be made up of very specific notifications, such as an arrow in the sidebar with associated text, or something as subtle as an image that is partially cropped out by the edge of the canvas.
To tie in with the function of consistency we also have learnability: interactive systems need to be easy to learn and remember. Whether the user is visiting a page or re-visiting the device itself, the user needs to remember what they have learned every time they interact with the system. Ideally, designers would aim for the person to use the system once and remember it from that experience; however, practically it takes a few more goes for the user to get a grip on how the system behaves and eventually use it efficiently.
To go into more depth, we can use learning theories from psychology to better our understanding of how people acquire and retain knowledge and skills. For example, we can look at operant conditioning and see how getting a reward or positive feedback increases the probability that the user will use the system and repeat a certain behaviour, whilst receiving a punishment or making an error decreases this probability. Additionally, we can look towards observational learning, which involves using someone to model a certain behaviour that we want replicated and repeated by other people. In relation to interactive media, observational learning can tell us a lot about how people learn to interact with devices from the behaviour of role models. A very simple example of observational learning is video tutorials, where a person sets an example of how a certain system should be used and other people then replicate that method.
The key factor in learnability is practice with devices, as this can lead to users picking up habits or even automaticity (requiring less occupation of the mind). Basically, the more we use an interface, the more we learn it and the easier it becomes to use in the future.
By taking a look at the graph to the right, we can see that when a person first uses a device they're considered quite a novice at the experience and the rate of making errors is quite high. Over time, though, we see that with more practice the user is considered more of an expert with the device and the probability of making errors has vastly decreased, which overall has led to improved performance. As designers we want to make something that is easy to learn, so that the user doesn't face a steep learning curve.
Another factor to consider with learnability is how people learn behaviours from experiences of real-world places and objects, which can then aid them in using interactive systems. In more depth, as designers we can take advantage of what people already know by targeting their transfer of learning and their perceived affordances: how people re-apply experiences in similar situations, and how we can use the functions of real-life objects metaphorically or in digital form in an interactive system. Effectively (with perceived affordances) we are taking the questions we ask of real-life objects, such as "How am I going to interact with it?", "What is it going to do?" and "Can I lift, press or turn this?", and drawing a digital equivalent when designing interactive systems. For some comparisons, analyse the image above and see how buttons, sliders and tabs have been translated, because of their affordances, into what we see digitally with their perceived affordances.

Next we have predictability, meaning that the interactive system should set precise expectations of what will happen before the interaction occurs. The basic method to set these expectations is to analyse visitor behaviour, which can reveal how well they are able to predict what will happen. A poor level of predictability can be seen when a user is randomly clicking around on elements expecting something to happen; here they are unable to grasp a sense of what they might expect or even what they are interacting with. In comparison, a user who is focused on the job in hand and understands their environment will use the system efficiently, as they will know where to click, what the outcome will be and how to accomplish their goal. If the correct expectations are set, the ability to predict the outcome can be easily established.
The usage of previews can help set the expectations of users and even show the constraints of what they are able to achieve with complex or new interactions. For example, while an interface is loading, a preview can be shown of what the content is going to be, how it behaves and how a user should interact with it. Using these taster-like interfaces is great for when a person is waiting for the main content to load, so that they not only understand and can predict what's going to happen, but also aren't bored in the process. Another example of a preview is when a game level loads and the map for the specific area is shown, with markers on it to show the goals for the user.
Other features such as labels, instructions, icons and images can also be used to set expectations of how a system might function. All these various methods can help us communicate what we want to tell the user about the interactive system. Specifically, we can tell the user what it can do, what will happen, where the person will go next and how the system will respond to certain actions. By developing certain patterns and consistencies, we can match a user's prior experience and expectations.
Lastly we have feedback; this is an essential factor in interaction design, as it can provide information about the progress of functions, the user's location, future possibilities and whether or not something has finished. In terms of visitor experience, feedback is used to support what somebody is doing instead of interrupting them and complicating things. An example of feedback is the "Undo" function present in a number of editing programs; this function allows the reversal of choices in order to correct mistakes or, in the event of an error, to revert back to a prior state.
Another key example of feedback is stating the progress of functions such as downloads or loading bars. The image below shows a Steam Library interface that displays how much has been downloaded, how much time remains, when the download started, what the current rate of the download is and how much needs to be downloaded overall. This dialogue is giving us multiple cues about what is happening, what you are getting, how long it will take, and that you have the option to get out of it.
Acknowledgement of actions is also a key element of feedback; in order to have a successful interactive system, every interaction should produce a noticeable and understandable reaction. It is imperative for the system to tell the user that their action has been noted, otherwise it can lead to the repetition of that action or even further errors and mistakes. The system needs to respond to an action; this response can range from a button animation pressing down, to a character typed on the keyboard being printed out in a word document, and so on. For more complex processes such as loading bars we can use accurate or inaccurate indicators. The image to the right shows some of the different types of indicators used in interactive systems: ones that can show us the size and duration of a certain function (e.g. downloads) and ones that cannot (e.g. buffering).
To summarise, these core principles do not all fit into separate categories; many of them overlap and interrelate with one another. The image to the right shows how these various factors all affect one another and are essential to the whole process of interaction design. Firstly, visibility displays certain interactions, and when their outcomes are noticed they can be accurately predicted and people will use the interface. From there, if meaningful feedback is given after the user activates an interaction, people can understand how their actions lead to the outcomes. When the feedback is understood they start to learn how the interface works, and with continuous practice, reinforcement and observation people's learning of the interface becomes stronger. Once they have learned how the interface functions, they are able to transfer that knowledge to similar interfaces (e.g. webpages built with the same software). As long as the interfaces within similar systems (or the same system) are consistent, people will be able to apply what they have learned and interact more efficiently and effectively.