Mapping Data to Senses

Hi Students,

Data visualization leverages the same cognitive processing system that evolved to spot savanna cats skulking in tall grass, recognize emotions in other human faces, and distinguish between food that is and is not safe to eat. We’ve evolved to perceive the world, and as primates, a lot of that perception is visual. The visual system is incredibly fast and remarkably accurate. Noah Iliinsky provides a good, high-level description of the visual processing center in his article, “Why is Data Visualization So Hot”:

… fundamentally, our visual system is extremely well built for visual analysis. There’s a huge amount of data coming into your brain through your eyes; the optic nerve is a very big pipe, and it sends data to your brain very quickly (one study estimates the transmission speed of the optic nerve at around 9Mb/sec). Once that data arrives at the brain, it’s rapidly processed by sophisticated software that’s extremely good at tasks such as edge detection, shape recognition, and pattern matching.

How do we go from dodging danger to comprehending charts? First, we need to understand how photons get turned into shapes and color; how we group objects together based on visual cues; how we understand motion. Scientists have been studying the visual system for a long time, and while there is still more to learn, we know a lot.

We know that unconscious, low-level processing in the visual system quickly groups objects by properties such as color, size, and shape. We know that when we focus on one part of an image or a movie, we can be blind to other changes. We know that our brain fills in information, creating shapes where none exist.

How do we map that knowledge onto data? As we discussed earlier this semester, there are four levels of measurement: nominal, ordinal, interval, and ratio. Most data that you are interested in will be one of those four types. Mapping human perceptual capabilities to these levels of measurement is the key to visualizing data. For example, we can easily distinguish between the colors blue and red (at least most sighted people can). However, blue and red don’t have a natural ordering. There is no reason to think that something colored red is worth more or is greater than something colored blue. Color is good at distinguishing members of a group, otherwise known as nominal measurements, but it would be a poor choice for differentiating ordered elements, or ordinal measurements. For ordinal measurements, shades of grey work well. Shades of grey are easy to distinguish and have a natural ordering.
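
To make the mapping concrete, here is a minimal sketch in Python (using matplotlib; the fruit categories, rating levels, and grey values below are made-up illustrations, not a prescription): nominal values get arbitrary distinct hues, while ordinal values get a grey ramp whose lightness follows their order.

```python
import matplotlib.pyplot as plt

# Made-up illustration data: one nominal attribute and one ordinal attribute.
fruits = ["apple", "cherry", "plum", "pear"]       # nominal: no natural order
ratings = ["low", "medium", "high", "very high"]   # ordinal: ordered levels

fig, (ax_nom, ax_ord) = plt.subplots(1, 2, figsize=(8, 3))

# Nominal -> distinct hues: any assignment works, because order carries no meaning.
hue_map = {"apple": "tab:red", "cherry": "tab:blue",
           "plum": "tab:green", "pear": "tab:orange"}
ax_nom.bar(fruits, [1, 1, 1, 1], color=[hue_map[f] for f in fruits])
ax_nom.set_title("Nominal: distinct hues")

# Ordinal -> grey ramp: lightness tracks rank, so the ordering is preserved.
greys = [f"{0.8 - 0.2 * rank:.1f}" for rank in range(len(ratings))]
ax_ord.bar(ratings, [1, 1, 1, 1], color=greys)
ax_ord.set_title("Ordinal: grey ramp")

plt.tight_layout()
plt.show()
```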

Data visualization relies on vision, but vision is just one sense. We have four others. Students, I want you to think about taste, touch, smell, and hearing. Can you create a “visualization” system for people who can’t see? I want you to think about your senses. Pick a non-visual sense and do some research. Figure out what is and isn’t easy to distinguish within the boundaries of that sensory data, and create your own mapping from sensory experience to type of measurement.

67 comments

  1. Perfume is the first thing that comes to my mind when I think of odor visualization. When we are buying and testing a perfume, we smell it, close our eyes, and sometimes a picture appears in front of us (if it’s a very good perfume). So actually, we have been playing with odor for a long time, changing how we feel about a scent by changing the underlying chemicals/ingredients of the product (perfume in this case).

    But this is a very ambiguous area that lacks a standard measurement for comparison, and hence its application is still very limited even today. One might think Chanel No. 5 made her feel like she’s in Paris, walking on the Champs-Élysées, but another may find it made her dizzy. So even though humans have very similar physical abilities to smell things, odor visualization is very difficult.

    I came across an interesting study on the internet: Smell-Visualizing Colorimeter Can Fingerprint Coffee Aromas and Toxic Gases (http://www.popsci.com/technology/article/2010-02/colorimeter-can-fingerprint-coffee-aromas-and-toxic-gases). The core of this product is the colorimeter, a card with tiny polymer film squares that hold 36 designer dye drops. Each dye pigment changes color when exposed to certain chemicals, and the combination of the 36 forms a unique chemical fingerprint for a given aroma. I found this a very clever way to map sensory experience to a standard measurement. We could build training data of color combinations by generating different kinds of smells, from sweet to toxic, etc. Then, each time we have new test data (a smell fingerprint), we can classify the smell and measure its “level.” The detailed research is from a chemist at UIUC (http://www.scs.illinois.edu/~suslick/documents/nature00710.pdf). A rough sketch of that classification idea follows below.
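
    To sketch the classification step, the code below treats each card reading as a 36-dimensional vector of color changes and labels it by its nearest reference fingerprint. Everything here (the reference library, the distance threshold, the nearest-neighbor rule) is an invented illustration, not the method from the Suslick paper.

```python
import numpy as np

# Hypothetical reference library: each aroma is a 36-dimensional vector of
# dye color changes (one value per dye spot). All values are made up.
rng = np.random.default_rng(0)
library = {
    "coffee":  rng.uniform(0, 1, 36),
    "vanilla": rng.uniform(0, 1, 36),
    "ammonia": rng.uniform(0, 1, 36),
}

def classify_fingerprint(reading, library, max_distance=2.0):
    """Label a 36-d color-change reading by its nearest reference fingerprint."""
    best_label, best_dist = None, float("inf")
    for label, reference in library.items():
        dist = np.linalg.norm(reading - reference)  # Euclidean distance
        if dist < best_dist:
            best_label, best_dist = label, dist
    # If nothing in the library is close, report an unknown smell.
    return best_label if best_dist <= max_distance else "unknown"

# A noisy re-measurement of coffee should still land on "coffee".
noisy_coffee = library["coffee"] + rng.normal(0, 0.05, 36)
print(classify_fingerprint(noisy_coffee, library))
```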

  2. One thing that comes to mind when I think of mapping one sense to another is those individuals who experience synesthesia (http://en.wikipedia.org/wiki/Synesthesia), which is essentially when one pathway in your brain maps onto another pathway. These are the individuals who can taste sounds, or see words and numbers as colors.

    An example is an individual known as SC (http://www.ncbi.nlm.nih.gov/pubmed/24167497). For SC, most words and a subset of non-linguistic sounds induce the experience of taste, smell, and physical sensations. I mean, how cool is that?

    I feel that doing some kind of visualization based on these individuals would be beneficial in terms of finding patterns in data; it is another layer in which we can do analysis and find some hidden patterns. Studying these individuals also gives insight into how our brains work. Just as Victoria mentioned using a sense in games, we could incorporate these additional sensations when we are in an augmented reality/simulation/game. We’d be able to experience the world the way these individuals do.

    It would be fun to create a game in this manner. The Stroop test is a ‘test’ that times your reaction to a particular shape or color. However, as you go through the examples, sometimes the shapes and words clash. For example, you might be presented with the word ‘blue’ printed in red letters, but have to respond with the correct word. It’s a fun experiment and really shows how some visuals work together and how others clash.

    Visualizations rely heavily on vision, so how do we clearly represent something for those individuals who are visually impaired? The most ‘obvious’ way would be to invoke the individual’s sense of touch:

    http://dots.physics.orst.edu/publications/visual.html

    shows their approach of feeding a visualization to a computer, which then translates it to braille. With our eyes, we can take a scatterplot and color the points depending on a categorical variable. How do we accomplish this with braille? Would we instead create separate plots for each color (like how we would do facets in R)? Or write 1,000 words for the one visual? A rough sketch of the facet idea follows below.
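
    One way to picture the facet option: drop the color channel entirely and save one black-and-white plot per category, so each facet can be embossed or brailled on its own page. The sketch below is a minimal Python illustration with made-up data (not tied to the linked project).

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative scatter data with a categorical variable (all values made up).
rng = np.random.default_rng(1)
points = {
    "group A": rng.normal(0, 1, (30, 2)),
    "group B": rng.normal(2, 1, (30, 2)),
    "group C": rng.normal(4, 1, (30, 2)),
}

# One facet per category: the category is carried by the page, not by color,
# so each figure can be rendered tactilely on its own.
for label, xy in points.items():
    fig, ax = plt.subplots()
    ax.scatter(xy[:, 0], xy[:, 1], color="black")
    ax.set_title(label)
    fig.savefig(f"scatter_{label.replace(' ', '_')}.png")
    plt.close(fig)
```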

  3. As humans we receive input from all five of our senses (sight, touch, hearing, smell, taste), but we tend to receive significantly more information from vision than from any of the other four. A new trend in data visualization is layering in other senses to “visualize, or sensualize,” your data. I feel the next sense we most use to understand information is sound. The idea of visualizing data through sound is appealing and comes fairly naturally because, much as we understand data through colors, shapes, and sizes, we can sense data through varying pitch, frequency, and volume. There are also different instruments that can serve as different vehicles through which the data can be conveyed.
    And it would seem that there is abundant information on sound-based data visualization.
    For example, if you look at the website http://money.cnn.com/markets/storysupplement/dow_piano/ you can hear how the stock market sounded in 2010. It might be interesting to try to engage other senses such as smell (when the stock market was stinking), heat (to emphasize that the market was heating up), or taste. A rough sketch of the basic pitch mapping follows after the links. Here are some additional links for audio data visualization: http://datavisualization.ch/showcases/sound-mapping-in-new-york-city/
    http://osocio.org/message/data_visualization_in_sound_a_song_of_our_warming_planet/
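
    As a minimal sketch of that kind of sonification, the code below maps each value in a series to the pitch of a short tone and writes the result to a WAV file using Python’s standard library. The “price” series and the pitch range are invented for illustration; the Dow piano piece linked above uses its own musical mapping.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100

def value_to_frequency(value, lo, hi, f_min=220.0, f_max=880.0):
    """Linearly map a data value onto a pitch range (A3 to A5 here)."""
    frac = (value - lo) / (hi - lo) if hi > lo else 0.5
    return f_min + frac * (f_max - f_min)

def sonify(values, path="series.wav", note_seconds=0.25):
    """Render one short sine tone per data point: higher value, higher pitch."""
    lo, hi = min(values), max(values)
    frames = bytearray()
    for v in values:
        freq = value_to_frequency(v, lo, hi)
        for n in range(int(SAMPLE_RATE * note_seconds)):
            sample = 0.4 * math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
            frames += struct.pack("<h", int(sample * 32767))  # 16-bit mono
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(bytes(frames))

# Invented "closing price" series, just to hear the shape of the data.
sonify([101, 103, 102, 107, 110, 108, 112, 115])
```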

  4. Michael Brevetti

    The concept of “visualizations” incorporating other senses is interesting. While taste and smell are certainly senses that can be used to enhance certain experiences – anyone else remember scratch and sniff? (http://en.wikipedia.org/wiki/Scratch_and_sniff) – they are at a disadvantage to hearing and touch, since different degrees of taste and smell tend to be more opinion-based than fact-based. While many will agree the taste of chocolate is pleasing, there would be more disagreement about how pleasurable the taste of a jalapeño is.

    On the other hand, sounds of different pitch, tempo, or volume could foreseeably be used to convey meaningful differences in data. For instance, tempo could be used to communicate distance, with a faster tempo meaning closer, slower meaning further. Based on some basic research, this concept is called sonification. An interesting article published in the Journal of Experimental Psychology in 2002, regarding experiments conducted on this topic, can be found here: http://sonify.psych.gatech.edu/publications/pdfs/2002JEPA-Walker.pdf

  5. In data visualization, sight is taken for granted most of the time, and only occasionally is the subject of mapping to the other senses approached. After reading the blog post, I was curious about the actual advances in technology in bringing visualization to the visually impaired. To my surprise, a few people are wrestling with the same question and making small, consistent strides in trying to understand the problem, improve, and innovate in the field. In this article (http://dots.physics.orst.edu/publications/visual.html), the authors mention an interesting concept of reading a map with audio/tactile methods on a tablet: the person runs his fingers over a map, and as he reaches an intersection an audio prompt gives him information about the location. The next article (http://www.ncbi.nlm.nih.gov/pubmed/12182801) refers to a series of experiments where they used non-speech sounds and haptics in order to increase interaction with a visualization. I also found this interesting article (http://link.springer.com/chapter/10.1007%2F978-3-642-39188-0_47) about designing a weather map for blind users. This article praises the benefits of using auditory feedback in combination with a touch-based device (such as a tablet).
    I am noticing a common thread in what I have read thus far: the alternative with the highest potential for success lies at the intersection of touch and hearing. The successful combination of a touch-based device and intuitive auditory cues to guide the user through his visualization experience is the most likely strategy for development in the field. A rough sketch of such an audio prompt appears below.
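
    To make the audio/tactile map idea concrete, the sketch below is a simple hit test: when the finger position on the tablet falls within some radius of a known intersection, the device speaks its label. The coordinates, labels, and the speak stub are all hypothetical placeholders, not taken from the cited papers.

```python
import math

# Hypothetical map annotations: (x, y) in tablet coordinates plus a label.
intersections = [
    {"x": 120, "y": 340, "label": "5th Avenue and Main Street"},
    {"x": 410, "y": 220, "label": "Main Street and Park Road"},
]

def prompt_for_touch(x, y, radius=25):
    """Return the label to speak if the touch lands near an intersection."""
    for node in intersections:
        if math.hypot(x - node["x"], y - node["y"]) <= radius:
            return node["label"]
    return None

def speak(text):
    # Stand-in for a real text-to-speech call on the device.
    print(f"AUDIO PROMPT: {text}")

label = prompt_for_touch(125, 332)
if label:
    speak(label)
```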

  6. Josh Kraut

    While pondering how one might go about creating a visualization system for individuals who can’t see, my mind immediately wandered back to some public comments around product accessibility for the disabled given by one of Google’s executives. General usability is typically something we take for granted in technology and it’s actually incredibly challenging to create products that can be easily used and understood by individuals who lack the ability to see.
    When we think about the underlying intent of data visualization, it seems to be more about enabling users to gain value from the visual as opposed to actually being able to see it - visual learning is just one way we can process that information. There are many others. I found it easier to use a specific example when trying to address non-visual “mapping”, so I envisioned how you might successfully communicate a building map to someone who was blind such that they could navigate their way around.
    In neuroscience there has been a lot of research focused on understanding how visually impaired individuals obtain “cognitive map knowledge” (an understanding of geographical space) without seeing. A lot of research suggests both auditory and tactile cues are useful in helping people create mental maps of geographical space, so my “visualization” system incorporates both. Imagine the end user has a cell phone that contains a pre-loaded map of a building (we’ll pretend Google Maps Indoors (http://maps.google.com/help/maps/indoormaps/) is ubiquitous). From where the user is standing (signaled by GPS), they receive real-time auditory feedback from the device which gives them context as to how close or far away they are from objects of interest (and physical barriers). The closer the user gets to an object, the higher the frequency of the auditory feedback. Think of it as similar to sonar. The cochlear system is very good at recognizing variations in sound waves, so using auditory frequency as an indicator for how close someone is to an object seems like it would work out well. To differentiate an object of interest from a barrier, different pitches could be used.
    The idea seems a little far-fetched, but the general thought is that this will help individuals learn the layout of a building of interest, such that they can create their own mental map which they will then use to navigate instead of relying on the device 100% of the time. A rough sketch of the distance-to-sound mapping follows the links below.

    See - http://people.ucalgary.ca/~rjacobso/web/publications/JVIB97printed.PDF & http://news.cnet.com/8301-11386_3-57577323-76/audio-based-virtual-gaming-aims-to-help-the-blind-navigate/
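
    Here is a minimal sketch of the core mapping described above, under the stated assumptions that the beep rate rises as the user gets closer (the sonar feel) and that pitch distinguishes points of interest from physical barriers. All of the numeric ranges are arbitrary placeholders.

```python
def feedback_parameters(distance_m, kind):
    """Map distance and object type to auditory feedback settings.

    Closer objects produce faster beeping (sonar-like), and the pitch
    tells points of interest apart from physical barriers.
    """
    # Beep rate: about 1 beep/s at 10 m or more, up to about 8 beeps/s up close.
    clamped = max(0.5, min(distance_m, 10.0))
    beeps_per_second = 1.0 + 7.0 * (10.0 - clamped) / 9.5

    # Pitch distinguishes the category (values are arbitrary placeholders).
    pitch_hz = 880.0 if kind == "point_of_interest" else 330.0
    return {"beeps_per_second": round(beeps_per_second, 2), "pitch_hz": pitch_hz}

# Example: a doorway of interest 2 m away vs. a wall 6 m away.
print(feedback_parameters(2.0, "point_of_interest"))
print(feedback_parameters(6.0, "barrier"))
```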

  7. Yukai Wang

    I think listening is a good way to “visualize” data for people who cannot see.
    On one hand, sound is ordered, so it can express ordinal data. Though people may not be good at identifying the absolute value each sound represents, it is not difficult to make comparisons.
    On the other hand, as we know, sound carries information in loudness (intensity), pitch, and timbre (tone). So this gives us a way to “visualize” multi-dimensional data. In particular, timbre can play a role like color in the visual system.
    By using the hearing system, we can present nominal and ordinal data. For interval and ratio data, though sound can carry that information, it is not easy to read off directly. One possible way is to use a machine to interpret it for them (though that feels like cheating). Another way, as in some of the comments above, is to use a 0/1 system to present the data to them, although this may make them deaf…

    http://osocio.org/message/data_visualization_in_sound_a_song_of_our_warming_planet/
    http://blog.chartbeat.com/2013/06/18/sound-and-color-data-as-art/

  8. After sight, the next best tool our ancestors had for avoiding predators (and the associated skills used for making visualizations, such as spotting patterns and noticing sizes) was sound. In fact, we already use several aspects of sound to convey information. Alarms are an excellent example of this, where we can use evolutionarily built-in associations (albeit ones that might not be as linear or simple as many of the visual ones) to convey information. There has been a surprising amount of research into what tones and noise levels evoke what kind of response: (http://www.cpsc.gov/PageFiles/103916/alarm1.pdf)

    One could imagine creating a system where different volumes and pitch levels correspond to kinds of information; however, this has several disadvantages. The first is that there is much more variety in people’s ability to recognize pitches. Thus small differences would be easily missed, whereas with something like a bar chart even a very small difference is noticeable (although one might argue that this could be an advantage, since the ability to see the slight difference between, for example, two bars on a bar chart could lead people to assume the difference is meaningful when it isn’t). The other issue is that it would be hard to observe two things at once, since humans group sounds together when they are heard at the same time, so you would have to hear one category at a time, similar to looking at one bar of a bar chart at a time.

  9. George Tsai

    In my opinion touch would be the most suitable sense to use to create a “visualization” system. The main reason for this is the ability to convey relevant differences between one part of the system and another. For example, it is very easy to feel that one part of a graph is larger than another, just as we can see that one bar in a bar chart is higher than the rest. One can convey the value of these variables through multiple sensations related to touch, such as heat or texture. Another reason is that touch conveys its information one part at a time, making it easier for one to explore the details of each part at his or her own pace and to build an “image” in the brain. One downside is that it could potentially be slower and unable to present all the information at once, unlike other senses.
    There are now braille displays available on the market: http://www.humanware.com/en-usa/products/blindness/braille_displays. These function by using software that reads the content of a screen and then sends it to a device where dots are raised. The current use is to display characters through braille, but the technology could be expanded to a larger board that uses a variety of dots to convey different information. This system also makes it easier to label the visualization so the user can see what values are represented, and it can constantly readjust itself to the wishes of the user. For example, a time series chart could be represented through dots, with different textures representing additional lines. In addition, the sides of the chart could be labeled in braille so the user can tell what value a dot at a certain point represents. A rough sketch of rasterizing a series onto such a board follows below.
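
    As a sketch of the larger-board idea, the code below rasterizes a numeric series onto a small grid of raised (True) and lowered (False) pins. The grid size and data are invented; the braille displays linked above actually expose a single line of character cells, so a multi-line pin board is a hypothetical extension.

```python
def series_to_pin_grid(values, rows=12, cols=40):
    """Rasterize a numeric series onto a rows x cols grid of tactile pins."""
    lo, hi = min(values), max(values)
    grid = [[False] * cols for _ in range(rows)]
    for col in range(cols):
        # Sample the series at this column, then raise the pin at its height.
        v = values[col * (len(values) - 1) // (cols - 1)]
        frac = (v - lo) / (hi - lo) if hi > lo else 0.5
        row = rows - 1 - int(frac * (rows - 1))
        grid[row][col] = True
    return grid

# Invented monthly series; print the grid with '#' standing in for raised pins.
series = [3, 4, 6, 9, 8, 7, 10, 12, 11, 13, 15, 14]
for row in series_to_pin_grid(series):
    print("".join("#" if pin else "." for pin in row))
```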

  10. Inki Sul

    The word visualization itself adheres closely to the sense of vision. “Visualization” in another sense is arguably a misnomer, but it makes a lot of sense to explore the boundaries. Since 3D visualization of data is already widespread and publicly available in even the most basic statistical software (e.g., Matlab), the next possible sense through which to “sensualize” data would be touch. Spatial perception is one of the most powerful cognitive abilities humans possess, and sometimes touching a sculpture can mimic the power of reading the actual graph with one’s own eyes.

    With the advent of 3D printers, performing the procedure described above is very much possible. The first link below shows an awesome-looking graph fresh out of R “printed out” into a sculpture. The second link even lies on the boundary between art/sculpture and visualization: a bracelet printed on a 3D printer based on one year of weather data in Canberra, and as you can see from the date on the post, this idea is not new at all. Even though it may take comparatively longer to touch the visualization than to view the graph, a 3D-printed visualization can be a perfect substitute for the blind.

    References:
    http://www.r-bloggers.com/guest-post-visualizing-data-using-a-3d-printer/
    http://www.shapeways.com/blog/archives/288-weather-bracelet-3d-printed-data-visualization.html

  11. Arpit Gupta

    I was watching some tutorials on neural networks and came across a lecture series by Andrew Ng of Stanford. In it he mentions how different senses can be used to function as eyes. Neuroscientists have hypothesized that we see with our brains and not with our eyes; the eyes are just organs that transmit signals to the brain, which then enables us to see. Within the brain we have the visual cortex, where sight information is sent by the eyes; other organs send information to their own cortical areas. We can essentially re-map the “wiring” in the brain to attach the nerves coming from the tongue to the visual cortex instead of the somatosensory cortex, so that all the signals from the tongue are interpreted as visual signals. A camera then captures the scene, converts it to electrical signals, and transmits these low-voltage signals to the tongue via a strip attached to it. Now whatever the camera sees, the person sees too, not through his eyes but through his tongue. http://www.ipam.ucla.edu/publications/gss2012/gss2012_10595.pdf

    3D heat maps/plots could also be exploited. By heat map/plot, I mean literal heat maps, with temperature variations to give a notion of large/small. Suppose we want to convey the number of UFO sightings on a US map. We could have a map of the United States with contours and each state at a different base height, so that a blind person can feel the boundary of each state. Then, if the UFO sightings are numerous in some state, its temperature would be higher, and vice versa. I got the idea from this paper, http://infoscience.epfl.ch/record/101354/files/edutainment07.pdf, where they try to help a visually impaired person judge shapes and figures by providing a feedback signal in the form of an audio response; such a system can work, as they have shown, but it is slow and can’t be used very effectively or efficiently.

  12. This topic is often covered in advertising, especially in perfume ads and chocolate commercials. They encapsulate visually the feelings they want you to have for that scent. They create a desire inside of you, making you believe that their scent will somehow magically make you rich, fashionable, and beautiful… and always, ALWAYS on semi-deserted islands. Desserts, on the other hand, often capture taste by linking it with desire. You never realized how amazing and sensual it could be to just pour chocolate until you’ve seen a Lindor commercial. These ads tap into desire, not only the feeling of sweet and delicious but, interestingly enough, with a sensual twist. The visualizations we have been analyzing for class just present the information we have in a way that’s easier to process and more accessible at the moment. When reading a journal article we don’t have access to the data, so the author uses graphics to reach out to us in that way because it’s most accessible. By the same token, we don’t have access to that piece of chocolate or perfume, but we do have access to the feelings these items might produce. As a budding scholar who hopes to make a career in marketing, I have no desire to reinvent the wheel, as I know a lot of research has gone into these methods (e.g., “Thoughts and Perceived Taste” by Elder and Krishna), but I do hope to build on the knowledge we already have, perhaps by researching more in depth what kinds of feelings arise from these inputs.

  13. Esteban Fajardo

    I choose hearing as the sense onto which to map the visualization. It is clear that since we are doing a projection of the visualization, we may lose expressive power and some of the subtleties of the data that we are trying to convey. However, it has been well documented that blind people can compensate for the lack of sight with enhanced hearing ability (Super Powers for the Blind and Deaf, http://www.scientificamerican.com/article.cfm?id=superpowers-for-the-blind-and-deaf). Furthermore, research and development has gone into devices that permit the perception of space using only sound (Seeing the world by hearing: Virtual Acoustic Space (VAS), a new space perception system for blind people, http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1684482&tag=1).

    Thus, I propose to use variables such as pitch, frequency, duration, and loudness to represent the types of measurement. Ordinal data can easily be represented by a discrete frequency of the emitted sound. Similarly, nominal data can be represented by emitting the exact same sound from different locations, allowing the subject to distinguish the signals. Additionally, interval data can be represented by the pitch of the sound, providing a continuous measurement without an absolute zero. Finally, volume or loudness can represent ratio data, since this scale has an absolute zero and is also continuous. A rough sketch of this mapping appears at the end of this comment.

    This idea is not new and has even been put into use by people who are not blind. The name is sonification, defined by Wikipedia as “the use of non-speech audio to convey information or perceptualize data” (Wikipedia, http://en.wikipedia.org/wiki/Sonification). One interesting example of this type is the sonification, produced by two Stanford professors, of an epileptic seizure (What Does an Epileptic Seizure Sound Like? http://blogs.kqed.org/science/2013/10/01/what-does-an-epileptic-seizure-sound-like/).
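
    As a minimal sketch of the mapping proposed above, the function below turns one observation into a set of playback parameters: the nominal category picks a playback location (stereo pan), the ordinal rank picks a discrete frequency step, the interval value becomes a continuous pitch offset, and the ratio value becomes loudness. The category list, step sizes, and example record are all invented for illustration.

```python
def observation_to_sound(category, rank, interval_value, ratio_value,
                         categories=("north", "south", "east", "west")):
    """Map one observation's four measurement levels to audio parameters."""
    # Nominal: the same sound played from a different location per category.
    pan = -1.0 + 2.0 * categories.index(category) / (len(categories) - 1)

    # Ordinal: a discrete frequency step per rank (arbitrary 110 Hz steps).
    step_hz = 220.0 + 110.0 * rank

    # Interval: a continuous pitch offset; zero is just "no offset", not absence.
    offset_hz = 10.0 * interval_value

    # Ratio: loudness, where a value of 0 really is silence (absolute zero).
    gain = max(0.0, min(ratio_value / 100.0, 1.0))

    return {"pan": round(pan, 2), "frequency_hz": step_hz + offset_hz, "gain": gain}

# Invented observation: region "east", rank 2, 1.5 units above a baseline, 40 units.
print(observation_to_sound("east", 2, 1.5, 40.0))
```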

  14. From my perspective, building a “visualization” system for blind people using hearing is one of the most effective ways. And it is possible to visualize a data set with all types of variables, including nominal, ordinal, and numeric variables, using sound. The reasons are that (1) the hearing system is inherently built around quantitative measurements such as frequency; for example, humans can only hear sounds from 20 to 20,000 Hz on average, which is called the hearing range (http://en.wikipedia.org/wiki/Hearing_range); and (2) humans can distinguish sounds by loudness, pitch, and timbre, so one way to visualize data by hearing is to convert it into loudness, pitch, and timbre scales.

    There is an example of teaching the blind to associate a series of sounds with different shapes (http://www.telegraph.co.uk/news/newstopics/howaboutthat/3405867/Device-lets-blind-people-see-with-sounds.html). The blind volunteer is fitted with a device which uses a camera to take pictures of items in front of the wearer. The pictures are then analyzed by a computer that converts the angles and lengths of lines into a series of sounds of varying pitch, tone, and volume. It is reported that after a lot of practice, people can actually visualize the things in front of them. A rough sketch of one possible image-to-sound scheme follows below.
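
    The article does not spell out the device’s exact encoding, but one plausible scheme (loosely in the spirit of image-to-sound systems such as The vOICe) scans the picture column by column, mapping pixel row to pitch and brightness to loudness. The sketch below just computes that mapping for a tiny made-up image; it is an illustration of the general idea, not the cited device’s algorithm.

```python
import numpy as np

def image_to_tone_schedule(image, f_min=300.0, f_max=3000.0):
    """Scan an image left to right; each column becomes a chord whose
    component pitches come from pixel rows and loudness from brightness."""
    rows, cols = image.shape
    schedule = []
    for col in range(cols):
        tones = []
        for row in range(rows):
            brightness = image[row, col]
            if brightness > 0.1:                      # skip near-black pixels
                # Rows near the top of the image map to higher pitches.
                frac = 1.0 - row / (rows - 1)
                freq = f_min + frac * (f_max - f_min)
                tones.append((round(freq, 1), round(float(brightness), 2)))
        schedule.append(tones)                        # one time step per column
    return schedule

# Tiny made-up 4x4 "image": a bright diagonal line.
image = np.eye(4)
for step, tones in enumerate(image_to_tone_schedule(image)):
    print(f"t={step}: {tones}")
```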
