In the Beginning was the Word


In 2008, if you were a mobile phone user in Hong Kong or Switzerland, or perhaps on the messaging application ICQ, you might have received a message from a friend spelled out with icons like a baby bird in a nest or a dinosaur skeleton. We call these icons “emojis” now, and the use of an image to stand in for words has become quite commonplace. The difference, however, is that these icons created by Israeli company Zlango (later known as Lango) weren’t just meant to be fun inserts supplementing a comment; they were meant to replace text altogether.

Using abbreviations could help you fit your words into one text message, but Yoav Lorch, who started Zlango in 2004, discovered that they reduced text message lengths by only twenty percent. He hit upon the idea that by using icons, users could neatly shrink long messages into a compact size. Zlango was able to demonstrate this by translating fairy tales like Little Red Riding Hood into Zlango icons. While Charles Perrault’s version is 743 words in English, Zlango managed to fit the story into only ninety icons.

Aside from their ability to shorten text messages, Zlango’s icons were envisioned as a universal “visual language” that anyone could read regardless of their native tongue. But this wasn’t the first time icons had been used to replace text. Years before Zlango’s fairy tales, Chinese artist Xu Bing started writing a book called Book from the Ground. Begun in 2003, the book recounts twenty-four hours in the life of a Mr. Black and is composed entirely of images and icons. Xu describes it as “a book that anyone can read,” much like Lorch saw Zlango’s icons as “the language of the people.”

Lorch and Xu’s claims of universality are not entirely accurate: pictorial languages are still prone to misinterpretation, especially without context. If I told you that the icon of the dinosaur fossil means “old,” you might hazard a guess that the icon of the baby bird in the nest means “young.” In fact, it’s supposed to mean “want.” Even Xu’s publisher cautions that his book is readable only to people who have “experience in contemporary life” and who understand the “icons and logos of modernity.” You would have to be familiar with symbols referring to airports, stoplights, coffee machines, and taxis to provide context to Mr. Black’s life in a city.


A universally comprehensible written language based on images seems like a self-evidently good idea, and a fascination with pictorial languages gripped European thinkers during the 17th century in particular. Francis Bacon, Gottfried Leibniz, and John Locke were just a few of the prominent philosophers of the time who were dazzled by the idea of a language that could be read by everyone. Many of them were inspired by the Chinese writing system, mistakenly believing that each character corresponded solely to one object or one compact idea, much like a mathematical or scientific symbol.

Many of these men believed that a universal written language would not only have practical purposes, such as use in international trade, but would also help create world peace by eliminating misunderstandings between different cultures. This may seem idealistic or even naïve, but the idea of one language uniting the world in harmony was the same conviction that drove L. L. Zamenhof to create Esperanto and, more recently, Charles K. Bliss to create Blissymbolics, an invented language without a spoken component that is close to what someone like Leibniz or Bacon might have envisioned.

It shouldn’t come as a surprise that, like the 17th century thinkers, Bliss was inspired by both scientific symbols and written Chinese. A Jewish chemical engineer from the former Austro-Hungarian Empire, Bliss fled to Shanghai during World War II to escape the encroaching Nazis. After he and his wife were forced into the Hongkew ghetto during the Japanese occupation, Bliss became interested in the signs and notices written in Chinese characters around him. After learning to recognize a few simple characters like “man,” he found to his surprise that he was reading the characters in German. It didn’t matter that he couldn’t speak or write Chinese; he believed he could still access the meaning of the characters just by reading them. As Arika Okrent explains in In the Land of Invented Languages, Bliss was entranced by the possibility of bypassing language through pictographic symbols to get straight to the meaning.

But Blissymbolics predictably falls prey to the same problems that Zlango and Book from the Ground face: even symbols can be misinterpreted or misunderstood. Okrent describes how upset Bliss became when the symbols for “food” and “out” together were misinterpreted as “picnic” when he meant “food out at a restaurant.” But how could anyone have guessed the precise meaning?


Bliss never studied Chinese in depth, and if he had, perhaps Blissymbolics would have benefited from the discovery that only around one percent of written Chinese has a pictographic source, as William G. Boltz notes in an article for World Archaeology. People literate in Chinese will be the first to say that reading Chinese is more complicated than intuitively recognizing characters because they resemble what they’re supposed to represent. Unfortunately, the myth persists that reading Chinese is somehow different from reading an alphabetic language like English, with the assumption that a non-alphabetic language like Chinese must be processed by the right hemisphere of the brain instead of the left.

Research on dyslexics has often been cited to support this belief. For example, studies have found that dyslexics who use alphabetic writing systems have reduced activity in their left temporal lobe, while dyslexics who use Chinese have decreased activity in their left middle frontal region. It seems reasonable to conclude that this proves brain networks for reading are not the same for different language users.

However, cognitive neuroscientist Stanislas Dehaene, author of Reading in the Brain: The New Science of How We Read, argues that study authors are looking at the results the wrong way.

He notes that the same research also shows decreased brain activity in Chinese dyslexics at a region of the brain located less than the width of a fingertip away from the spot where Western-language dyslexics experience an anomaly – the same region normally associated with reading.

Looking at the results from this perspective, it is clear that there is a universal reading mechanism in the brain. Furthermore, Dehaene has a fascinating suggestion regarding the different brain anomaly locations: perhaps it’s a clue that we should be looking closer at the type of dyslexia that is being measured. “Phonological impairments are predominant in dyslexics who are taught an alphabetic writing system, while a form of ‘graphomotor’ dyslexia may prevail in Asian writing systems – even if the two subtypes exist in all countries.”

Further dispelling the right versus left hemispheres myth, a study published by the Haskins laboratory at Yale University early this year concludes that the same brain areas are activated during reading and speech, regardless of language.

The reading process may be complicated, but it is the same for everyone. Dehaene succinctly describes it thus: “Upon entering the retina, a word is split up into a myriad of fragments, as each part of the visual image is recognized by a distinct photoreceptor. Starting from this input, the real challenge consists in putting the pieces back together in order to decode what letters are present, to figure out the order in which they appear, and finally to identify the word.”

With written Chinese, characters, instead of being split into individual letters, are broken down into their individual morphemes – the smallest unit of meaning – and syllables. Unfortunately for all those 17th century philosophers and Charles Bliss, there is no direct “image to meaning” when you read Chinese.


We just can’t seem to bypass language when we read, even if we are reading icons and symbols. Language has to exist for us to be able to read them in the first place because we have no other way to capture meaning. You may be thinking, well, of course, for those of us who have grown up speaking before learning to read and write, it’s impossible to read symbols – or anything – without resorting to words.

But what if you’ve never been exposed to spoken language? What if you learned language solely through reading, like Tarzan does in Tarzan of the Apes? In the novel, the jungle-dwelling Tarzan, who is raised by apes, discovers a children’s book in an abandoned cabin and flips through it to look at the pictures, eventually realizing that the “little bugs” under each picture are words.

“And so he progressed very, very slowly, for it was a hard and laborious task which he had set himself without knowing it—a task which might seem to you or me impossible—learning to read without having the slightest knowledge of letters or written language, or the faintest idea that such things existed.”

Is it possible for a reader like Tarzan to exist? Our closest examples would have to be deaf children, most of whom learn to read without the help of a spoken language. However, it is not encouraging that deaf children who use English have long been noted to have poorer reading abilities than their hearing counterparts.

Some researchers have speculated that to become readers, children must learn the mapping between the spoken language they already know and printed words on a page. For an alphabetic language like English, that mapping is based on sound, which automatically creates a big barrier for deaf children’s literacy.

Since Chinese’s logographic features mean that it relies less on phonological encoding and more on visual encoding, could we assume that the same problem wouldn’t exist among deaf Chinese children? Without spoken language and with the unique nature of Chinese characters, couldn’t deaf Chinese children go directly to the meaning of the words they read without being mediated by language? Wouldn’t they be fluent readers?

Unfortunately, that’s not quite the case. Dr. Jun Hui Yang, who studies deafness in China, has found that deaf children who read in Chinese also lag behind hearing children in terms of reading ability and are more or less on the same level as their English-reading deaf peers.

For deaf children to read fluently, they must first be exposed to a language; it is not possible to learn a first language solely through print. Susan Goldin-Meadow and Rachel I. Mayberry write in their paper “How Do Profoundly Deaf Children Learn to Read?” that “knowing a language – even a manual language [like sign language] with a different structure from the language captured in print – is better for learning to read than not knowing any language.”

The real problem for Tarzan isn’t ignorance of written language; it is ignorance of how the sounds of words create meaning, in short, of language itself.


City dwellers read visual language every day. No parking. Stop. Loading zone. Children crossing. When you see these symbols, their meanings are so readily apparent that you might not even bother to convert them into a word as you walk or drive past.

This practical literacy is something our ancestors shared with us. Even though many of them could barely write their own names, they knew how to read symbols: a lord’s sigil, a traitor’s mark, an inn’s sign. Reading symbols allows you to go about your daily business without any need to second-guess meaning. Complex ideas, however, invariably need words, imprecise and unreliable as they may be, as Locke bemoans in An Essay Concerning Human Understanding.

He condemns them as barriers to truth, but perhaps this very unreliability is the best part of reading in the first place. If Locke and his peers had succeeded in creating a universal language, we might have had direct access to their ideas, but would we have enjoyed Bacon’s sharp aphorisms and the earnest thoughtfulness of Locke’s writings? Our brain breaks down and creates meaning as we read words, and this process is also what allows us to smile at a clever turn of phrase or weep at a moving description of someone’s suffering.

Zlango’s Little Red Riding Hood managed to tell us the story of a young girl’s encounter with a wolf, it’s true, but is it the same without the quietly building menace behind the singsong charm of Perrault’s call-and-answer dialogue between Little Red Riding Hood and the Wolf?

“Grandmother, what big arms you have!”

“All the better to hug you with, my dear.”

“Grandmother, what big legs you have!”

“All the better to run with, my child.”

“Grandmother, what big ears you have!”

“All the better to hear with, my child.”

“Grandmother, what big eyes you have!”

“All the better to see with, my child.”

“Grandmother, what big teeth you have got!”

“All the better to eat you up with.”

We do not read simply to receive ideas dictated by a writer. We read to interact with them, to work at understanding them, to take pleasure from them. The visual languages of Charles Bliss and Zlango don’t offer these pleasures the way words do, which is perhaps why Zlango closed shop in 2014, and why Blissymbolics is now used mostly by a Canadian rehabilitation center for disabled children to facilitate their English-language learning.

Bliss died in 1985, disappointed and hurt that Blissymbolics never achieved the recognition and widespread use that he believed it deserved. As for Lorch, since leaving Zlango, he has founded a new venture called Total Boox. It’s a “pay as you go” e-book app that this author of six German-language books says gives everyone the freedom to just “go ahead and read.”



Reprinted from Mi Lu magazine
