On the Feasibility of a Universal Translator


Being a language geek, a techie, and a Star Trek fan, I felt that a post about the universal translator while Star Trek is celebrating its 50th anniversary is only logical, even if it is not directly related to customer service or chatbots. And no, we are not going to talk about better machine translation. That technology is already a reality, with a variety of approaches and promising new developments, even though it still needs a lot of polish. While not yet at the level of a human translation expert, machine translation is already usable in multiple scenarios. (Translation of known languages is, of course, also a part of the Star Trek universal translator, and on some occasions Star Trek linguists have to tweak the linguistic internals manually.)

This post will focus on the device’s decoding module for unknown languages, or decipherment.

Decipherment in Real Life

Just like the human ability to learn languages, decipherment is not new. No matter how elaborate, all techniques share the same core: pairing the unknown language with known bits. The Rosetta Stone is the most famous case study: a tablet with parallel inscriptions in Ancient Egyptian hieroglyphs, Ancient Greek, and another Egyptian script (Demotic) served as a starting point for understanding a long-dead language. Today, statistical machine translation engines are built in a similar fashion, using parallel texts as “virtual Rosetta Stones”. If, however, a parallel text is not available, decipherment relies on closely related languages or whatever other cues can be applied. Perhaps the most dramatic (without exaggeration, movie-worthy) story of decipherment is that of the Maya script, which involved two opposing points of view amplified by Cold War tensions. More recently, Regina Barzilay from MIT deciphered a long-dead language using machine learning, assuming similarity with a known language.
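The “virtual Rosetta Stone” idea is easy to illustrate with a toy sketch. The sentence pairs below are invented for illustration, and the normalized co-occurrence score is a deliberate simplification of real alignment models (such as IBM Model 1), but it shows the core intuition: words that keep appearing in aligned sentences together are probably translations of each other.

```python
from collections import Counter

# Toy sentence-aligned "parallel corpus" (invented example pairs).
parallel = [
    ("the dog runs",   "le chien court"),
    ("the cat runs",   "le chat court"),
    ("the dog sleeps", "le chien dort"),
    ("the cat sleeps", "le chat dort"),
]

# Count how often each (source, target) word pair appears in
# aligned sentences, and how often each target word appears overall.
cooc = Counter()
tgt_freq = Counter()
for src, tgt in parallel:
    for t in tgt.split():
        tgt_freq[t] += 1
    for s in src.split():
        for t in tgt.split():
            cooc[(s, t)] += 1

def best_match(word):
    # Normalize by target-word frequency so that ubiquitous words
    # ("le") do not outscore the true counterpart ("chien").
    scores = {t: c / tgt_freq[t] for (s, t), c in cooc.items() if s == word}
    return max(scores, key=scores.get)

print(best_match("dog"))   # -> chien
print(best_match("runs"))  # -> court
```

With only four sentence pairs the method already pairs content words correctly; real systems do the same thing at the scale of millions of sentence pairs, with far more careful probability models.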

What happens when there is no Rosetta Stone or similar language? In face-to-face interaction, real-world entities are used to build the vocabulary. This was done centuries ago by seafarers exploring the New World, and is occasionally done today by anthropologists and linguists like Daniel Everett, who spent decades working with the Pirahã people in the Amazon, studying their poorly documented language.

But what if face-to-face interaction is not possible?

Life Imitates Fiction: Lingua Universalis

For decades, SETI researchers have been scanning the skies for signs of extraterrestrial intelligence. Some of them focus specifically on the questions, “what happens if we do get a signal?” and “how do we know whether it is a signal and not just noise?”

The two most notable SETI researchers working on these issues are Laurance Doyle and John Elliott. Doyle’s work focuses on applying Claude Shannon’s information theory to determine whether a communication system is similar to human communication in its complexity. Doyle, together with the noted animal behavior and communication researcher Brenda McCowan, analyzed various animal communication data, comparing its information-theoretic characteristics to those of human languages.

John Elliott’s work focuses specifically on unknown communication systems; his publication topics range from detecting whether a transmission is linguistic, to assessing the structure of the language, and, lastly, to building what he calls a “post-detection decipherment matrix”. In Elliott’s own words, it would use a “corpus that represents the entire ‘Human Chorus’”, applying unsupervised learning tools and, in his later works, including other communication systems (e.g. animal communication). Elliott’s hypothetical system relies on an ontology of concepts with a “universal semantic metalanguage” (just as Swadesh lists compile a set of shared basic concepts).

Interestingly, there are certain similarities between the fictional universal translator and the way real-life scientists attack the problem. According to Captain Kirk’s explanation, “certain universal ideas and concepts” are “common to all intelligent life”; the translator compares the frequencies of “brainwave patterns”, selects those ideas it recognizes, and provides the necessary grammar. Assuming that a variety of hypothetical neural centers may produce recognizable activity patterns (brainwaves or not), and that communication produces a stimulus activating specific areas in the neural center, the approach may have merit, provided hardware sensitive enough to detect these fluctuations becomes available. The frequency analysis is also in line with Zipf’s law, which is mentioned throughout the work of Elliott and Doyle.
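Zipf’s law is straightforward to check mechanically: in natural language, a word’s frequency falls off roughly as the inverse of its frequency rank, so log-frequency plotted against log-rank approximates a line with slope near −1. Here is a minimal sketch (the function names are mine, not taken from any of the cited work, and a meaningful slope estimate requires a much larger sample than the toy sentence shown):

```python
from collections import Counter
import math

def zipf_slope(tokens):
    """Least-squares slope of log(frequency) vs. log(rank).

    For natural-language text the slope tends toward -1 (Zipf's law);
    for noise or highly repetitive signals it deviates noticeably.
    """
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(freq) for freq in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

tokens = "the cat sat on the mat and the dog sat on the log".split()
print(zipf_slope(tokens))  # negative; approaches -1 on large corpora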

Other Star Trek series keep mentioning a vaguely described translation matrix used to facilitate translation. Artistic license and technobabble aside, the word “matrix” and the sheer number of translation pair combinations correspond to a real-world interlingua model, which employs an abstract, language-independent representation of knowledge. (This is not really new either: a similar structure was first proposed by the likes of Descartes and Leibniz with the characteristica universalis.)

A certain linguacode is also mentioned a few times, used as a last-resort tool when the universal translator does not work. The linguacode may have a real-world equivalent: lincos. Lincos, together with its derivatives, is a constructed language designed to communicate with other species using universal mathematical concepts.

View from the Engine Room

As someone who spent over a decade working on a language-neutral semantic engine, today called Aspect NLU, I got very excited when I realized that the system and the ontology described by Elliott as a prerequisite for semantic analysis are very close to what I built. In fact, Elliott’s designs may be taken a few steps further.

Bundling all of the languages into a “human chorus” may steer the system towards a “one-size-fits-all” result that is too far from the target communication system. It does not have to be this way: with a system capable of mapping both syntactic structures and semantics (not just a limited set of entities), it is possible to build a “corpus of scenarios”, allowing more accurate ordered statistical models that rely on the universality of interaction scenarios.

For example, most messages meant to be part of a dialogue in most languages start with a greeting. Most technical documents contain numbers. All demands contain a request, and often a threat. News accounts refer to an event. Most long documents are divided into chapters and so have either numbers or chapter names between the chapters. Reference articles describe an entity.

The reasons for this have nothing to do with the structure of any particular language; they generally stem from the venerable principle of least effort or from the necessities of efficient communication in groups.

Using a system that runs on semantics allows building a corpus that does not rely on the surface representation and instead records word senses, creating a purely semantic and truly universal corpus.

Having syntactic structures semantically grouped (another unique quirk of Aspect NLU) opens up more possibilities. Greenberg’s linguistic universals would allow eliminating unlikely syntactic structures; novel machine learning approaches like deep learning, which extract features automatically, may help discover additional universal constraints.

Instead of a Rosetta Stone, the system could serve as a high-tech “Rosetta Rubik’s Cube”, with an immense number of combinations being run until the best match is found.

Beyond Words

Is it possible to test the hypothetical “universal translator” software on something more accessible than a hypothetical communication from ETI?

Many researchers believe so. While it has not been proven that cetacean communication has all the characteristics of human language, there is evidence suggesting this is the case.

Dolphins use so-called individual signature whistles, which appear to be equivalent to human names; among other things, the signature whistles are used to locate individuals (thereby meeting one of the requirements for a communication system to be considered a language: displacement). In the course of Louis Herman’s experiments, dolphins managed to learn an adapted version of American Sign Language and understand abstract concepts like “right” or “left” (proving that they have the mental capacity for actual human language, even in a limited scope). Lastly, the complex social life of dolphins requires coordination of activities, which can only be achieved by efficient and equally complex communication.

In addition to the often-cited cetaceans, there is evidence of other species having complex communication systems. A series of experiments has shown that ant communication may be infinitely productive (that is, have an infinite number of combinations, like human language does) and that it may efficiently “compress” content (e.g. instead of saying “turn left, left, left, left”, say “turn left four times”).

Both of the SETI researchers mentioned above, Doyle and Elliott, studied cetacean communication with various tools provided by information theory. Elliott calculated entropy for human language, birdsong, dolphin communication, and non-linguistic sources like white noise or music. Communication systems share a “symmetric A-like amplitude” shape (more symmetric for humans and dolphins, less symmetric for birds). Dolphin communication also has other characteristics similar to human languages. Doyle conducted similar measurements with humpback whale vocalizations and arrived at similar conclusions.
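The first-order version of such a measurement is easy to reproduce. This is a deliberate simplification: Elliott and Doyle work with entropies over multi-symbol sequences, not just single-symbol frequencies, but even the single-symbol version separates pure repetition, structured signals, and uniform noise:

```python
from collections import Counter
import math

def shannon_entropy(symbols):
    """First-order Shannon entropy in bits per symbol.

    Structured communication (like language) falls between the extremes
    of constant repetition (0 bits) and uniform noise (maximal bits).
    """
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(shannon_entropy("aaaaaaaa"))         # 0.0 bits: no information
print(shannon_entropy("abcdefgh"))         # 3.0 bits: uniform, noise-like
print(shannon_entropy("the sea the sea"))  # in between: structured signal
```

Applied to symbol inventories extracted from whistles or song units instead of characters, this is the kind of yardstick that lets dolphin communication, birdsong, and white noise be placed on the same scale.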

This is why several animal communication initiatives are coordinated with SETI efforts. A truly universal decipherment framework would be incomplete without the ability to ingest and learn a complex animal communication system.

On the other hand, an advanced communication system may not be a natural language at all. Just as the real-world lincos mentioned above may contain special headers, a hypothetical human-machine hybrid creature may want to adopt a more efficient mode of communication, such as compressed and/or encrypted messages (which differ from natural languages in their entropy values) and possibly some kind of rich content including visual and auditory fragments.

If we do not want our universal decipherment software to be limited to humanoids with cranial ridges, it has to account for the entire spectrum of possibilities, being a bag of tricks rather than a single tool.


Just like many other sci-fi concepts, a universal translator is immensely difficult to implement, but there are reasons to believe it is possible, and some elements of the fictional design may yet surface in real life.

Also, go watch Arrival when it’s out.


Vadim Berman

Vadim Berman is a director of engineering at Aspect Software, who came to Aspect with the acquisition of LinguaSys in 2015. Vadim co-founded LinguaSys in 2010 after spending several years building what became its core technology. Vadim recently moved to Massachusetts from Melbourne, Australia.
