
Questions and answers

At what age should children be able to say simple sentences?

Children differ in their learning strategies and in the course of their development. That is why the age at which certain stages of development are reached (e.g. “at one year old children speak their first words”) can only be viewed as a general average. So it is completely normal for children to reach certain levels a few months earlier or later. With this knowledge we can understand the general course of development on the way to the first simple sentences.

The first simple sentences appear around the age of two and a half to three years. That might sound late when you consider that children already speak their first words at around 12 months. But before they can put these words together into sentences, children need to acquire some basic knowledge of the grammar of their language. We often think of nouns and verbs as the basic building blocks of sentences ("cat" + "want" + "milk" = "the cat wants some milk"). But even very simple sentences require function words ("the", "some") and inflected forms ("want" → "wants"). Learning such function words and inflections is not easy because, in contrast to content words such as "cat" and "milk", children cannot see or otherwise experience the meanings of "the" and "wants". What complicates the endeavor further is that the words have to be put in the correct order and pronounced aloud with a sentence melody and intonation that corresponds to the intended meaning (compare "The CAT wants some milk." with "The cat wants some MILK."). All of this is very difficult for young children, even with very simple sentences.

But even if children under the age of two and a half do not yet say full sentences, they are already learning to combine words in other ways. For example, many children begin to combine words with gestures after their first birthday, such as pointing at something and nodding. The combination of a word and a gesture, such as saying "milk" while nodding, conveys more than either the word or the gesture alone. And some studies show that these early combinations predict the development of two-word utterances a few months later.

From around the age of 18 months, children begin to use two-word expressions such as "Bear. Suitcase." to express sentence-like meanings (such as "The bear is in the suitcase."). Initially, these two words sound like two separate utterances with no connection. But with a lot of practice, the pauses between the words become shorter and the melodies of the individual words begin to merge.

At the age of two, when children have fully reached the "two-word phase", they often produce sequences of two to three words that sound like normal sentences, except that most of the function words and inflections are missing ("There cats!"). At this stage of their development, some children even put such words in a consistent order. For example, some children always place so-called pivot words in first position ("more apple"), while other children always place them in second position ("apple here"). With shorter pauses, a consistent word order and a melody that spans all the words, children's utterances take on more and more characteristics of a single planning unit: a sentence.

But even when children say their first simple sentences at two and a half to three years, they still have a lot to learn. Their knowledge of the possible combinations of words is not yet comparable to that of adults. Instead, they usually limit themselves to what they hear most often. Using the correct inflections can remain difficult for a long time, and it is not uncommon for children to keep making mistakes while learning which inflections are regular (brush, brushes, brushing, brushed) and which are irregular (be, am, is, was). In the two years after their first simple sentences, children make great strides in their vocabulary and in the inflections they use. Their sentences become more complex as they get older, and by the time they are four to five years old they have learned most of what they need to communicate fluently with others.

If you would like to find out more about language development, take a look at the website of the Nijmegen Baby & Child Research Center.

By: Marisa Casillas & Elma Hilbrink
Translated from English by: Matthias Barthel & Sebastian Sauppe

Literature:

Clark, E.V. (2009). “Part II: Constructions and meanings”. In First language acquisition (pp. 149-278). Cambridge University Press.

Iverson, J. M., & Goldin-Meadow, S. (2005). Gesture paves the way for language development. Psychological science, 16(5), 367-371.

Language and programming in the brain

Siegmund and colleagues (2014) were the first to investigate empirically the connection between programming and other cognitive domains such as language processing, at least using brain imaging. They used functional magnetic resonance imaging (fMRI), which measures changes in local blood oxygenation as a function of activity in different networks of neurons. Computer science students lay in the scanner while they read and comprehended code snippets and, in a control condition, while they read similar code and searched for syntactic errors without having to understand anything else. Activity was found in the classic language networks, Broca's and Wernicke's areas, especially in the left hemisphere.

The code to be read was laid out so as to promote so-called bottom-up comprehension, i.e. reading and understanding phrase by phrase and line by line (the sketch below gives a feel for this); participants could not simply take a cursory look at the overall structure of the code. This bottom-up process can be compared to processes in language comprehension in which individual words are combined according to syntactic rules into a coherent sentence, and these sentences are then linked to form a conversation. There is also evidence that linguistically gifted people make better software developers (Dijkstra 1982). A mechanistic explanation for this could be the strength of the connections between the brain regions mentioned above, which differs from person to person. In summary, the same networks in the brain appear to be used for programming and for language.
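As an illustration of such bottom-up reading, here is a small sketch in Python (our own invented example, not one of the stimuli actually used in the study): the uninformative names force the reader to work through the code line by line to see what it does.

def f(s):
    # Build the result by putting each new character in front of the
    # characters collected so far.
    r = ""
    for c in s:
        r = c + r
    return r

print(f("scanner"))  # prints "rennacs": the function reverses its input

Only after following the loop step by step does the overall purpose (reversing the input) become clear, which is exactly the kind of phrase-by-phrase integration the study aimed to elicit.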

By: Julia Udden, Harald Hammarström and Rick Janssen

Translated from English by Tayo Takada & Sebastian Sauppe

Siegmund, J., Kästner, C., Apel, S., Parnin, C., Bethmann, A., Leich, T., Saake, G. & Brechmann, A. (2014) Understanding Understanding Source Code with Functional Magnetic Resonance Imaging. In Proceedings of the ACM / IEEE International Conference on Software Engineering (ICSE).

Dijkstra, E. W. (1982). How Do We Tell Truths that Might Hurt? In Selected Writings on Computing: A Personal Perspective (pp. 129-131). Springer.

What do learning a natural language and learning a programming language have in common?

Programming languages ​​are usually taught to teenagers or adults - similar to foreign languages. This type of learning is called "explicit learning". In contrast, we all learned our mother tongue implicitly in early childhood. Children do not need to be taught explicitly how to use language, they learn through observation and practice. A basic requirement that makes this kind of learning possible in the first place is the interactive nature of language: people ask questions and answer them, communicate when they do not understand something and negotiate until they understand it (Levinson 2014). Programming languages, on the other hand, are passive: they execute instructions and spit out error messages, but they don't find code interesting or boring, and they don't ask questions.

This is why it is sometimes difficult to think in a programming language, i.e. to formulate instructions comprehensively and completely unambiguously. The good news is that many modern programming languages ​​use the same concepts and structures because they are all based on the same principles. This means that it is often relatively easy to learn a second programming language if you already have a good command of another. Learning a foreign language often means a lot more effort. One thing is clear, however: it is becoming increasingly important to master both types of language.

Further reading

The children who learned to use computers without teachers

Levinson, S. C. (2014). Pragmatics as the origin of recursion. In F. Lowenthal, & L. Lefebvre (Eds.), Language and recursion (pp. 3-13). Berlin: Springer.

By: Julia Udden, Harald Hammarström and Rick Janssen

Translated from English by Tayo Takada & Sebastian Sauppe

What do programming languages have in common with natural languages?

In fact, some programming languages read almost like simple English sentences. For example, here is a small program written in the programming language Python. It goes through a list of names and prints every name that also appears in the list "invited_people":


for name in my_list:
    if name in invited_people:
        print(name)


Other programming languages, on the other hand, are far less readable. Here is the same program again in the programming language Scheme:


(map (lambda (name)
       (if (member name invited_people)
           (display name)))
     my_list)


How similar or different are natural languages and programming languages really? To answer this question, we first need to introduce a few key terms that linguists use to describe the structure of natural languages. Otherwise we run the risk of being misled by superficial similarities.

Structural similarities with semantics and syntax

Two of the most important terms in linguistics are semantics and syntax. Simply put, semantics is the technical term for "meaning". More precisely, the semantics of a word comprises all the information associated with the underlying concept. The concept SLEEP, whether written down or pronounced aloud, denotes an action of a living being, and that is the semantics of this word.

Syntax, on the other hand, is the study of how words of different kinds (e.g. nouns and verbs) can be combined with one another. A sentence like "My ideas sleep" is well formed from the point of view of syntax, but its semantics are questionable, because ideas are not alive and therefore cannot sleep. Semantics and syntax thus both involve rules about which elements fit together and which do not. The difference is that semantics concerns meaning, while syntax concerns part of speech and word form (i.e. different inflected forms of the same stem, such as "search", "searches", "searched") and how these word forms can be combined.

Now we know something about semantics and syntax in natural languages. What about programming languages? The programmer begins with an intention of what the code should do; this could be called the semantics, or meaning, of the code. The syntax of the programming language connects the individual pieces of code with one another. What words are to natural language, variables, functions, indices and the various kinds of brackets are to a programming language. The two code examples above, in Python and Scheme, have the same meaning, but their syntax differs (just as the syntax of natural languages such as German and Japanese differs).

Different goals

We have noted some parallels in the basic structure of natural languages and programming languages. But how far does this analogy really go? The form of natural languages is not arbitrary. It is determined by human anatomy (tongue and vocal cords on the one hand, the brain on the other) and by the fact that it must be suitable for communication. Programming languages, on the other hand, are designed to meet the requirements of a so-called Turing machine, i.e. to be able to perform, over and over again, all the calculations that a person could in principle carry out with pencil and paper.

Programming languages are necessarily self-contained. Natural languages, on the other hand, are constantly changing and allow blends (e.g. "democrature" from "democracy" and "dictatorship", "Denglish" for mixed German-English, or German "jein" from "ja" and "nein"). Programming languages make it possible to read in long lists of data, store them and process them quickly in many individual steps in order to deliver an output or final result. The important thing is that this always happens in exactly the same way. Natural languages, on the other hand, must give their speakers ways to greet each other and make promises, but also sometimes to remain vague or to lie. New terms and syntactic structures keep appearing and disappearing again, and the meanings of existing words are constantly changing. A sentence in a spoken language can have several meanings. For example, the sentence "I saw the dog with the telescope" may mean that I saw a dog through my telescope or that I saw a dog that had a telescope. People use context and world knowledge to choose between these possible interpretations. The basis of natural languages is therefore a constantly changing culture, whether by mixing existing cultures or by creating new ones. Programming languages are not nearly as flexible: a line of code must have a single meaning so that its output can be reproduced with certainty.
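To see this contrast concretely, consider how the ambiguous telescope sentence above would have to be represented in a program. The following sketch is purely illustrative (the data structures and names are invented for this example), but it shows that code forces a choice between the two readings:

# Two unambiguous representations of "I saw the dog with the telescope".
# These dictionaries are invented purely for illustration.
reading_instrument = {
    "event": "see",
    "agent": "I",
    "patient": "dog",
    "instrument": "telescope",  # I used the telescope to see the dog
}
reading_possession = {
    "event": "see",
    "agent": "I",
    "patient": {"entity": "dog", "has": "telescope"},  # the dog has the telescope
}

print(reading_instrument)
print(reading_possession)

A human listener resolves the ambiguity from context; a program has to be handed one of the two structures explicitly.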

By: Julia Udden, Harald Hammarström and Rick Janssen

Translated from English by Tayo Takada & Sebastian Sauppe

Do gender-marking articles influence the way we think in general?

Languages organize their nouns in different ways. Some have no classification of nouns at all (e.g. English: every noun can be replaced by "it"), some have two classes (e.g. French: every noun is either masculine or feminine), and some have as many as 16 (e.g. Swahili: different classes for animate things, inanimate things, tools, fruits, ...). Some languages with two noun classes distinguish masculine and feminine (e.g. French), others distinguish 'common' (masculine and feminine) and neuter (e.g. Dutch). All of these languages obviously differ at the level of their so-called grammatical gender system. While Western European languages may create the impression that grammatical gender mainly affects the article in front of a noun (e.g. "der", "die" or "das" in German), it in fact also affects the noun itself and other words connected with it. Polish, for example, has no articles at all (as if the word "the" did not exist in English), and yet it has a complex gender system that makes adjectives agree with nouns. The reasons for these differences between languages remain a mystery.

Since the gender system of a language permeates every sentence, one can ask whether it also influences the thought processes of language users more generally. At first glance, that seems unlikely. A grammatical gender system is just a set of rules that determines how words must be changed when they are combined. There is no 'deeper' meaning behind these rules. And yet a number of experiments have produced surprising results.

In the 1980s, Alexander Guiora and his colleagues noticed that two- to three-year-old Hebrew-speaking children (whose language distinguishes masculine and feminine nouns) are about half a year ahead of their English-speaking peers in developing their gender identity. It is as if the gender distinction among Hebrew nouns gave these children a hint that similar gender differences also exist in their natural environment.

Adults also seem to use grammatical gender, even when it brings them no benefit. Roberto Cubelli and his colleagues asked test subjects to decide whether two objects belonged to the same category (e.g. tools, furniture, etc.) or not. When the grammatical genders of the objects' names matched, decisions were made faster than when they did not. The task did not involve naming the objects. Even so, the study participants appear to have used the arbitrary gender system of their mother tongue.

Edward Segel and Lera Boroditsky found the influence of grammatical gender even outside the laboratory: in an encyclopedia of classical painting. They examined all depictions, as male or female figures, of concepts that have no natural gender, such as love, justice and time. It struck them that these concepts were personified as male beings when the grammatical gender in the painter's language was masculine (e.g. French 'le temps'), and as female figures when it was feminine (e.g. German 'die Zeit'). The depicted gender matched the grammatical gender in 78% of cases when the painter's language had grammatical gender, as Italian, French and German do. Moreover, the effect was still present when only those concepts whose grammatical gender differs between the examined languages were taken into account.

These and similar studies clearly show that the system a language uses to classify nouns affects the general perception of its users. Forcing people to think in certain categories also affects their general habits of thought. This shows quite clearly that thought is influenced by what a language obliges its speakers to say, not merely by what it allows them to say. The effect of grammatical gender on our trains of thought shows that language is not an isolated skill but lies at the heart of much of our thinking.

Written by Richard Kunert and Gwilym Lockwood

Translated by Richard Kunert

More reading:

Segel, E., & Boroditsky, L. (2011). Grammar in art. Frontiers in Psychology, 1, 244. doi:10.3389/fpsyg.2010.00244

Is it inevitable that speaking a foreign language regularly affects our own mother tongue?

Many people who learn a foreign language, or who are in frequent contact with people learning a foreign language, notice that the way we speak a foreign language is strongly influenced by our mother tongue. Often a characteristic, non-native accent is recognizable, and speakers sometimes carry over words or grammatical structures from their native language into the foreign language. A less well-known phenomenon, however, is that speaking a foreign language also affects one's mother tongue.

People who start speaking a foreign language regularly (for example after moving abroad) find that they sometimes have difficulty remembering words in their mother tongue. It often happens that these people "borrow" words from the foreign language when speaking their mother tongue. For example, Dutch people who frequently speak English use English words for which there is no direct translation in Dutch (e.g. the word "native") when speaking their mother tongue. They also sometimes use literal translations of English idioms that are formed differently in Dutch (e.g. English "to take a photo" rendered word for word as "een foto nemen", where idiomatic Dutch has "een foto maken", literally "to make a photo").

Experimental studies over the last few decades have shown that these influences can be detected at various linguistic levels. The choice of words (the lexicon), the sentence structure (the syntax) and the pronunciation of the mother tongue can all be influenced by the foreign language. The research suggests that all the languages we know are automatically co-activated when we speak. This means that if, for example, a Dutch person speaks German, their German and their Dutch, as well as any other languages that person speaks, are activated at the same time. This co-activation is a very likely cause of the cross-linguistic influences described above.

Does that mean that learning a new language inevitably influences your mother tongue at all linguistic levels? To a certain, perhaps not always obvious, degree, yes. However, there are large individual differences between speakers. The influence of the foreign language on the mother tongue increases the more dominant the foreign language is in daily use, and especially when you interact frequently with native speakers (e.g. after moving abroad). Furthermore, the influence increases over time: people who emigrate are more likely to show such effects after 20 years abroad than after only two, although regular use of the foreign language, especially at the beginning, also has a great influence. Some researchers have suggested that individual differences depend heavily on cognitive abilities, such as how well a person can suppress irrelevant information. On this view, someone who is better at suppressing irrelevant (e.g. visual or auditory) information should also be better able to minimize the influence of the foreign language on the mother tongue. It must be emphasized that some of these influences are very subtle and may not even be noticeable in a normal conversation.

Written by Shiri Lev-Ari and Hans Rutger Bosker

Translated by Florian Hintz and Cornelia Moers

Further reading:

Cook, V. (Ed.). (2003). Effects of the second language on the first. Clevedon: Multilingual Matters.

Why can multilingual people sometimes speak only one of their languages after a stroke?

In the event of a stroke, the blood supply to certain areas of the brain is temporarily disrupted. This happens either through a cerebral haemorrhage or a blockage of a blood vessel. If the stroke affects areas of the brain that play an important role in language, language functions can be partially or even completely lost. This is called aphasia. With plenty of time, proper treatment, and rehabilitation, aphasia can in some cases be cured, at least to some extent. People who speak more than one language, i.e. bilingual or multilingual people, can recover from a stroke in different ways. The most likely case is that the patient regains language proficiency in both languages ​​in a similar way (parallel aphasia). In some cases, however, one of the languages ​​is recovered disproportionately quickly and better than the other language (s) (selective aphasia).

The existence of cases of selective aphasia in multilingual patients initially led researchers to assume that different languages are stored in different areas of the brain. Thanks to imaging techniques in neuroscience, however, we now know that this is not the case: even if a person speaks many different languages, the same brain regions are always activated. Although we still do not fully understand how our brains handle multiple languages, we do know some factors that seem to affect the degree to which a multilingual patient regains each language. For example, a language that was mastered less well before the stroke is likely to recover less well than a language that was mastered better. In other words, the more automatic an ability is, the easier it is to regain after a stroke. Skills that take a lot of effort, such as speaking a language you rarely use, are much more difficult to regain. Social factors and emotional attachment also play an important role in understanding which language is recovered after a stroke: it matters, for example, how often a language is used and what feelings we associate with it. While we know that these factors play an important role, we still have to work out exactly how they interact before we can successfully predict the course of language recovery.

One current theory attributes selective aphasia to damage, during the stroke, to brain areas responsible for certain control mechanisms. When multilingual people use one of their languages, they have to suppress, or 'switch off', the other(s). If these switching mechanisms are damaged by a stroke, the patient may no longer be able to regain both languages equally, because the ability to control the languages has been lost. In such a case it looks as if one of the languages has been completely lost, but the problem actually lies in the control mechanisms. Recently, researchers found evidence that control mechanisms are more impaired in bilingual people with selective aphasia than in those with parallel aphasia. Interestingly, as language is regained after a stroke, neural connections between language and control mechanisms are restored. Even though this interesting finding supports the theory of a link between selective aphasia and control mechanisms, it is only one of several explanatory models. Research is currently under way to gain a better understanding of the various factors that may underlie the unusual recovery patterns of multilingual aphasia after stroke.

Diana Dimitrova and Annika Hulten

Translated by Franziska Hartung & Louise Schubotz

Fabbro, F. (2001). The bilingual brain: Bilingual aphasia. Brain and Language, 79(2), 201-210. pdf

Green, D. W., & Abutalebi, J. (2008). Understanding the link between bilingual aphasia and language control. Journal of Neurolinguistics, 21(6), 558-576.

Verreyt, N. (2013). The underlying mechanism of selective and differential recovery in bilingual aphasia. Department of Experimental psychology, Ghent, Belgium. pdf

Why do some languages have a writing system that accurately reflects how the language is spoken, while other languages have a less clear writing system?

No language has a spelling system (orthography) that reproduces the sound of words absolutely and completely, but some are definitely better than others. Italian, for example, has a shallow spelling. This means that the spelling of the words represents the sounds of Italian quite well (although Sicilian, Sardinian, and Neapolitan speakers might disagree here). English, on the other hand, has a deep spelling, which means that the spelling and pronunciation match less well.

Below we explain the two main reasons why Italian spelling is relatively consistent. First, the Accademia della Crusca has been regulating the Italian language since it was founded in 1583, and over several centuries it has established a comprehensive and effective consistency in Italian spelling. Second, standard Italian has only five vowels: a, e, i, o and u. This makes it, in theory, much easier to distinguish between the different vowels. Other examples of languages with five vowels are Spanish and Japanese, both of which also have a shallow spelling.

Japanese is an interesting case. Some words are written with Japanese syllabic characters (kana), which accurately represent the sound of the words. Other words, however, are written with adapted Chinese characters, which represent the meaning of the words but not their sound.

French has an orthography that is deep in only one direction. While a given sound can be written in several different ways in French, there is usually only one way of pronouncing a particular vowel or combination of vowels. For example, the sound [o] can be written as au, eau or o (as in haut, oiseau and mot). The written sequence eau, however, is always pronounced [o] in French.

English, by contrast, has a very deep orthography and has successfully resisted spelling reform for centuries (interestingly, this is less true in the US: Noah Webster's American Dictionary of the English Language successfully established a modernized spelling). One obvious reason is the lack of an academy for the English language. Another is that English evolved from a mixture of many European languages: a blob of Latin and Greek here, a pinch of French and Celtic there, a few bits of German, and a handful of Norse; English has a long and complicated history. Some spelling irregularities in English reflect the original etymology of the words. The English words send and sell, for example, are originally Germanic; here the spoken 'se' is actually written as se-. The pronunciation of ce- as 'se' in center, certain and celebrity, in turn, was influenced by French. The silent b in doubt and debt goes back to Latin roots (dubitare and debitum).

All languages change over time. English, however, underwent a series of profound changes in its vowel sounds towards the end of the Middle Ages, known as the Great Vowel Shift. The early and middle phases of this shift coincided with the invention of the printing press, which helped to freeze English spelling at that point. The pronunciation then continued to evolve and change while the spelling stayed the same, which means that many words in today's English are still spelled as they were pronounced 500 years ago. Shakespeare's plays originally sounded very different from how they are performed today, yet the spelling is still almost exactly the same. In addition, it is much harder in English to adapt the spelling to the pronunciation because the number of vowels is simply very large: depending on the dialect, English speakers produce up to 22 distinct vowel sounds, which have to be represented using only the letters a, e, i, o, u and y. It is no wonder, then, that so many competing letter combinations have arisen.
A deep orthography makes it more difficult to learn to read a language, for native speakers and foreign learners alike. Spelling reforms could make this easier. Even so, many people oppose them because, in their view, the benefits do not outweigh the loss of linguistic history. The English, for example, love regularity when it comes to queues and tea, but not when it comes to spelling.

Gwilym Lockwood & Flora Vanlangendonck

Translated from English by Katrin Bangel & Kai Wanke

See also:
Original pronunciation in Shakespeare: http://www.youtube.com/watch?v=gPlpphT7n9s

How, in what order and why do people develop the various skills necessary for language acquisition?

Children usually start babbling at the age of two or three months: at first they produce vowels, later consonants, and finally, at seven to eleven months, word-like sounds. Children babble to discover how their speech apparatus works and how they can produce different sounds. Along with the production of word-like sounds comes the ability to extract words from the speech they hear. These are important steps towards a child's first words, which are usually produced at around 12 months of age.

Simple one-word utterances are followed, in the second half of the child's second year, by two-word utterances in which grammar can already be recognized. Children growing up with German or Dutch (subject-object-verb order in subordinate clauses, which have a more consistent word order than main clauses) or with English (subject-verb-object order) produce their two-word sentences in subject-verb order, for example "I eat", while children learning Arabic or Irish (languages with verb-subject-object order) produce sentences like "eat I". From then on, a rapid increase in the child's vocabulary can be observed, while the sentences the child produces become longer and more complex. Grammar is said to be fully developed by the age of four to five; at this age, children are essentially linguistic adults. The age at which children develop these skills can vary widely from one child to another, and the course of development also depends on the linguistic environment in which the child grows up. But by the age of four or five, all healthy children have acquired language. Language acquisition goes hand in hand with various processes in the brain, such as the formation of connecting nerve tracts, the increase in metabolic activity in different brain regions, and myelination (the production of myelin sheaths, which form a layer around the axons of neurons and are essential for the correct functioning of the nervous system).

By Mariella Paul and Antje Meyer

Translated from English by Mariella Paul

Additional Information:

Bates, E., Thal, D., Finlay, B. L., & Clancy, B. (1999). Early language development and its neural correlates. In I. Rapin & S. Segalowitz (Eds.), Handbook of Neuropsychology, Vol. 6: Child Neurology (2nd ed.). Amsterdam: Elsevier.

What are homophones and why do they exist?

Homophones are words that sound the same but have at least two different meanings. The phenomenon occurs in all languages. An example from English is the pair FLOWER and FLOUR; a German example is LEERE ("emptiness") and LEHRE ("teaching"). Although these word pairs are pronounced the same, they differ in meaning and spelling (they are therefore called heterographic homophones). But there are also homophones that sound the same and are spelled the same, for example the word BANK, meaning either a bench (in German) or a financial institution. Such words are called homographic homophones. Words with the same or similar sound but different meanings also exist across language boundaries: the word WIE, for example, means 'how' in German but 'who' in Dutch.

One might think that homophones cause problems for the listener. After all, how is a listener to know what a speaker means who says "I hate the mouse!"? Indeed, scientific studies show that listeners need more time to process ambiguous words than unambiguous ones. In most cases, however, the context helps us to identify the intended meaning. The sentence above could, for example, appear in the following contexts: "I have nothing against my daughter's animals, but I hate the mouse" or "I love my new computer, but I hate the mouse". Usually, listeners filter out the intended meaning so quickly that they do not even notice the possible ambiguity. The preceding linguistic context as well as our general knowledge of the world help us to recognize the meaning the speaker intends.

Why, then, do homophones exist at all? It would be much less confusing to use different combinations of sounds to express different concepts. Linguists assume that sound changes in the course of constant language change give rise to homophones. For example, the first letter of the English word KNIGHT stopped being pronounced by the early 18th century, which made the word a homophone of NIGHT. Contact between languages can also produce homophones. The English word DATE (in the sense of a romantic meeting) was recently adopted into Dutch and now forms a homophone with the Dutch word DEED ("did"). Homophones can arise from sound changes over time, but they can also disappear as language changes. The Dutch verb form ZOUDT ("would"), for example, is rarely used today; as a result, the noun ZOUT ("salt") is losing its homophonic character.

A particularly nice property of homophones is that they often appear in wordplay and are used as stylistic devices in literary texts. In Shakespeare's Romeo and Juliet (Act I, Scene IV, lines 13-16), for example, Romeo uses a homophone while rejecting Mercutio's suggestion to dance:

Mercutio: Nay, gentle Romeo, we must have you dance.
Romeo: Not I, believe me: you have dancing shoes
With nimble soles: I have a soul of lead
So stakes me to the ground I cannot move.


Such elegant use of homophones contributed, among other things, to Shakespeare's great literary success.

By David Peeters and Antje S. Meyer

Translated from English by Cornelia Moers & Thordis Neger

Additional Information:

Bloomfield, L. (1933). Language. New York: Henry Holt and Company.

Cutler, A., & Van Donselaar, W. (2001). Voornaam is not (really) a homophone: Lexical prosody and lexical access in Dutch. Language and speech, 44(2), 171-195. (link)

Rodd, J., Gaskell, G., & Marslen-Wilson, W. (2002). Making sense of semantic ambiguity: Semantic competition in lexical access. Journal of Memory and Language, 46(2), 245-266. (link)

Tabossi, P. (1988). Accessing lexical ambiguity in different types of sentential contexts. Journal of Memory and Language, 27(3), 324-340. (link)

How does dyslexia develop?

Dyslexia is suspected when children of normal intelligence and with intact sensory organs have difficulty learning to read or spell. Dyslexia was first described as a developmental disorder in 1890. It was initially believed that the cause was an impairment in the processing of visual symbols, which gave rise to the term 'congenital word blindness'. Later, however, it was found that in most cases it is not visual deficits that are responsible for dyslexia but subtle language difficulties. When learning to read, a child must understand how words are built up from individual units of sound (phonemes) and learn to link these phonemes to symbols (letters). Although general spoken-language ability is normal in most people with dyslexia, they show difficulty in tests of sound manipulation and sound processing, even when the test requires no reading or writing.

Dyslexia is defined as a reading disorder without an obvious cause. It is therefore conceivable that the term 'dyslexia' covers not a single syndrome but a cluster of different disorders based on different mechanisms. In practice, however, it has proved difficult to divide dyslexia into subtypes. Various studies have shown significant associations between reading problems and other behavioral measures. For example, many people with dyslexia are less accurate when asked to rapidly name sequences of objects or colors. Some researchers believe that dyslexia arises from a combination of several cognitive deficits. How the different behavioral patterns can be brought together in a coherent theory is still under discussion.

It is known that dyslexia runs in families and that genetic factors contribute significantly to susceptibility. The genetic component is, however, complex and very heterogeneous: several different genes interact with environmental factors and thus contribute to the development of the disorder. Researchers have already identified a number of interesting candidate genes, such as DYX1C1, KIAA0319, DCDC2 and ROBO1. Rapid advances in DNA sequencing technology offer great potential for discovering more genes in the years to come. The neurobiological mechanisms associated with dyslexia are still largely unknown. One well-known theory locates the cause in a disturbance of a process in early brain development during which the brain cells of the fetus move to their final positions in the brain, a process called neuronal migration. Indirect evidence for this hypothesis comes from post-mortem studies of human brains and from studies of the functions of some candidate genes in rats. Many open questions remain to be answered before we fully understand the causal mechanisms underlying this elusive syndrome.

by Simon Fisher
Translated from English by Katrin Bangel & Franziska Hartung

Suggestions for further reading:

Carrion-Castillo, A., Franke, B., & Fisher, S. E. (2013). Molecular genetics of dyslexia: an overview. Dyslexia, 19, 214-240. (link)

Demonet, J. F., Taylor, M. J., & Chaix, Y. (2004). Developmental dyslexia. Lancet, 363, 1451-1460. (link)

Fisher, S. E. & Francks, C. (2006). Genes, cognition and dyslexia: learning to read the genome. Trends in Cognitive Science, 10, 250-257. (Link)

Do two people who use different sign languages understand each other's signs?

When people outside of language research hear that we study sign language, or when they watch people communicating in sign, they often ask whether sign language is universal. The answer is that almost every country has at least one national sign language, and these sign languages are often very different from the dominant spoken language of the country. British and American signing, for example, look very different. Chinese and Dutch sign languages differ not only in vocabulary but also in finger spelling, and each has its own grammatical rules. Astonishingly, deaf Chinese and deaf Dutch signers, who share no common language, are nevertheless able to communicate with each other relatively easily.


This type of ad hoc communication is known as cross-signing. In cooperation with the International Institute for Sign Languages and Deaf Studies (iSLanDS), we are investigating exactly how cross-signing arises between people from different countries when they meet for the first time. To do this, we analyze video recordings of signers from, for example, South Korea, Uzbekistan and Indonesia. Preliminary results show that the communicative success of such a conversation depends on how well the participants can use and understand creative signs that are not part of their own language. This linguistic creativity often relies on highly iconic gestures (e.g. drawing a circle in the air to represent a round object, or indicating a man by suggesting a moustache). General principles of human interaction also play a role, such as repeating a sign as a request for more information.

Cross-signing is different from International Sign. The latter is used, for example, at international gatherings such as the congress of the World Federation of the Deaf or the Deaflympics. International Sign is strongly influenced by signs from American Sign Language and is used in presentations to an international deaf audience that is familiar with its vocabulary. Cross-signing, by contrast, arises when two sign language users meet who do not know each other's language.

Connie de Vos, Kang-Suk Byun & Elizabeth Manrique
Translated from English by Florian Hintz & Katrin Bangel

Further reading:

Information on similarities and differences between different sign languages, and between spoken and signed language (written by the World Federation of the Deaf) (link)

Mesch, J. (2010). Perspectives on the Concept and Definition of International Sign. World Federation of the Deaf. (link)

Supalla, T., & Webb, R. (1995). The grammar of International Sign: A new look at pidgin languages. In K. Emory and J. Reilly (Eds.), Sign, gesture and space. (pp.333-352) Mahwah, NJ: Lawrence Erlbaum.

Is there a universal body language?

Much of the time we also communicate with our bodies, and in very different ways. One form of such communication is body language. During social interaction, our attitudes and emotions are expressed through our bodies as well. The dynamics of the interaction, our interpersonal relationship with the person we are talking to and our own personality all play a role (see also the answer to the question What is body language?). The signals our bodies send out are mostly unconscious. It would therefore be difficult to establish a universal body language that people from very different cultures could understand, learn and use. Within a culture, however, there are many similarities in how people express their attitudes and emotions through their bodies.

Another form of communication through the body is the use of co-speech gestures. These gestures, produced while speaking, are movements of the hands, arms and occasionally other parts of the body. In contrast to the signals expressed through body language, co-speech gestures convey meanings that are closely related to the meaning of what is being said. For example, we may hit the open palm of one hand with the fist of the other when we talk about an impact. However, it is rarely possible to interpret co-speech gestures without the accompanying speech, because speech and gesture are very closely linked. If two interlocutors speak different languages, co-speech gestures are therefore of little help to communication. Moreover, co-speech gestures arise and change within a culture, so they differ to some extent between cultures. Even within a culture there is no standard form for these gestures; each speaker creates them individually. Although there are similarities in how gestures are used to convey certain meanings, there are also considerable differences from person to person. For communication between people who speak different languages, co-speech gestures are accordingly not suitable as a universal means.

A way out that people often take when trying to communicate across a language barrier is to use pantomimic gestures, or simply pantomime. These gestures are highly iconic: they depict structures or objects in our environment (as do some iconic co-speech gestures). Even when they are produced alongside speech, these gestures can also be understood without it. When communicating in a foreign language they are therefore much more informative than co-speech gestures. As long as we share knowledge about the world with our counterpart (e.g. about certain actions, objects and their spatial relationships), these gestures are communicative, even if we do not speak the same language.

Pantomimic gestures, which can convey information even without speech, must nevertheless be distinguished from sign languages. Unlike pantomime, the sign languages of deaf communities are full-fledged languages. They consist of hand shapes and movements that have conventional meanings and correspond to units of spoken language. Here, too, there is no universal sign language: different communities have different sign languages (German, Dutch, English, French and Turkish sign languages are just a small selection).

Judith Holler & David Peeters
Translated from English by Katrin Bangel & Manu Schütze

Continue reading?

Kendon, A. (2004). Gesture: Visible action as utterance. Cambridge University Press. (link)

McNeill, D. (1992). Hand and mind: What gestures reveal about thought. Chicago: University of Chicago Press.

Is there a language gene that other species don't have?

Language is unique in the natural world and is an integral part of what makes humans human. Although other species have complex communication systems of their own, even our closest living relatives among the primates cannot speak, partly because they cannot adequately control their vocalizations. Some chimpanzees and bonobos have learned rudimentary sign language after years of intensive training, but even these exceptional cases are nothing compared to a normally developing human toddler, who can use language spontaneously and innovatively to express thoughts and ideas about the present, the past and the future.

Genes certainly play an important role in solving this mystery. However, there is no such thing as a “language gene” or “gene for language”, i.e. a special gene that gives us this unique ability. Genes do not directly determine our cognitive performance or behavior. Rather, they contain blueprints for proteins, which in turn take on functions within the body's cells. Some of these proteins have significant effects on the properties of brain cells, for example by influencing how these cells divide, how they grow and how they make connections with other brain cells. These processes, in turn, are responsible for how our brain works, and that includes speaking and understanding language. It is therefore quite possible that evolutionary changes in certain genes had an influence on how the human brain regions are networked and thus played a role in the development of language. Changes in several genes, not just one “magical language gene”, are likely to contribute to this. There is no reason to believe that critical genes suddenly appeared out of nowhere in our species.

From a biological point of view, there is strong evidence that the human capacity for language is based on modifications of gene networks that are anchored deep in evolutionary history. A convincing argument for this comes from studies of FOXP2 (a gene that is often wrongly portrayed in the media as the fabled "language gene"). It is true that FOXP2 is relevant to language; the role of this gene in human language was originally discovered because rare mutations that disrupt it cause severe speech and language disorders. But FOXP2 is not found only in humans. On the contrary, versions of this gene are found in astonishingly similar form in many vertebrates (including primates, rodents, birds, reptiles and fish), and it appears to be active in comparable brain regions in these different animals. Songbirds, for example, have their own version of FOXP2 that helps them learn to sing. Detailed studies of the different versions of this gene in different species show that it plays a role in the wiring of brain cells. The gene has persisted for millions of years of evolutionary history without changing much, but interestingly, after the human lineage branched off from that of chimpanzees and bonobos, at least two small but interesting changes took place in FOXP2. Scientists are now investigating these changes to find out how they affected the development of connections in the human brain, one piece of the puzzle in exploring the origin of our language.

by Simon Fisher, Katerina Kucera & Katherine Cronin
Translated from English by Paul Hömke & Louise Schubotz

For further reading:

Revisiting FOXP2 and the Origins of Language (link)

Fisher S.E. & Marcus G.F. (2006) The eloquent ape: genes, brains and the evolution of language. Nature Reviews Genetics, 7, 9-20. (link)

Fisher, S.E. & Ridley, M. (2013). Culture, genes, and the human revolution. Science, 340, 929-930. (Link)

Is there a quick way to expand my English vocabulary?

It is not easy to learn a new language, especially because you have to memorize so many new things. A best practice that applies to everyone probably doesn't exist, but some strategies can make memorization more efficient.

In order to build up a vocabulary in a new language, the most conventional type of learning is often chosen: a word in this new language is translated into the mother tongue and thus memorized. This works very well when both languages ​​are similar, such as Dutch and German. However, this method is too indirect to learn languages ​​that are very different from German (such as Chinese). A more efficient way of doing this is to omit this intermediate step: The new word is not first translated into your own language, but directly linked to the objects or actions it expresses. People who are fluent in a second language often even find themselves in the situation where they use words that do not even exist in their mother tongue. This makes it clear that sometimes it is not possible at all to learn words in a conventional way, but rather that they have been acquired from the context of the new language.

In order to be able to forego the intermediate step of translating at an early stage, it is helpful to visualize the context in which a word is used. In this way, we link the new word to a picture, imitating the way children learn a language. Another way to build your vocabulary faster is to create groups of words that are related to each other and then practice them together. For example, on the way to work, you can name everything that has to do with traffic and transport or everything that is on the table at dinner. The trick is to give meaning directly to the new language rather than understanding it through something familiar like the mother tongue. If you are advanced in learning, you can search for word meanings in a dictionary that does not translate the words into your mother tongue but describes them in the new language (e.g. the Oxford Advanced Learner's Dictionary for English).

One useful method is known as "spaced learning". New knowledge (for example, new words when learning a language) is introduced in a first block, repeated in a second block and finally tested in a third block. Between the blocks there is a 10-minute break, during which it is important that learners distract themselves completely and do nothing related to what they have just learned (e.g. by doing short fitness exercises). Studies have shown that this combination of repetition and timed breaks can lead to long-lasting connections between neurons in the brain, so that what has been learned is stored in long-term memory. These processes take place within minutes and have been observed not only in humans but also in other species.

Of course, it is inevitable to forget something or make mistakes while studying. In the end, the more we use the new words, the better we remember them.

Sylvia Chen & Katerina Kucera
Translated from English by Katrin Bangel & Manu Schütze

Continue reading?

Kelly P. & Whatson T. (2013). Making long-term memories in minutes: a spaced learning pattern from memory research in education. Frontiers of Human Neuroscience, 7, 589. (link)

Do languages become more and more similar over time or do they differ more and more from one another?

A common assumption about language change goes like this: if a language community splits into two groups that from then on have no contact with each other, their once-shared language will drift further and further apart over time. When the two groups come into contact again, however, the two languages can also converge again by borrowing elements from each other. This is exactly what has happened in many parts of the world: parts of one language have been carried over into other languages. A current example is the word 'Internet', which has found its way into many languages. When global communication was not so easy, such borrowings were limited to languages in the same or adjacent areas, and as a result neighboring languages could become more and more similar. Linguists call these areal effects. Whether one language converges toward another also depends on sociolinguistic factors, such as migration, or one group being more powerful or more prestigious than the other.

Another reason to adopt words or rules from another language is that they may be easier to learn or may "fit" the human brain better. This is much like in biology, where different species can evolve similar traits, as birds, bats and some dinosaurs each evolved wings.

It is also possible for languages from opposite ends of the world to change in ways that make them more and more similar to each other. Scientists have discovered features of words in several languages that appear to show a direct relationship between the sound of a word and its meaning. For example, in many languages the word for 'nose' contains sounds that are themselves made with the nose, such as "n". Many languages also use a similar word to query what someone has just said: "huh?" or "what?". The likely reason is that such a word is short, sounds like a question, and gets the other person's attention very quickly.

It is difficult to tell whether words in two languages are similar because they reflect universal features (as described above) or because the languages are historically related. Evolutionary linguistics tries to find ways to tease these possibilities apart.

Evolutionary linguistics also addresses a question that underlies all linguistic research: are there limits to the diversity of languages? Early theories assumed that we are constrained by our biology and can only process certain structures. Recently, however, field researchers have been documenting more and more languages with a huge variety of different sounds, words and rules. So perhaps every time two languages converge in one respect, they diverge in another.

By Seán Roberts & Gwilym Lockwood
Translated from English by Manu Schütze & Katrin Bangel

Some links

Can you tell the difference between languages? (link)
Why is studying linguistic diversity difficult? (link)
Is 'huh?' a universal word? (link)

Further Reading

Nettle, D. (1999). Linguistic diversity. Oxford: Oxford University Press.

Dingemanse, M., Torreira, F., & Enfield, N.J. (in press). Is "Huh?" a universal word? Conversational infrastructure and the convergent evolution of linguistic items. PLoS One. (link)

Dunn, M., Greenhill, S. J., Levinson, S. C., & Gray, R. D. (2011). Evolved structure of language shows lineage-specific trends in word-order universals. Nature, 473, 79-82. (link)

When scientists speak of maternal language, do they mean mother tongue?

By maternal language, scientists mean the way in which children are spoken to. How parents and other caregivers talk to children has been studied in linguistics since about the 1970s. Scientists studying children's language acquisition wanted to understand what impact this way of speaking has on children's language learning. Because it is mainly mothers who look after young children, researchers initially concentrated on "motherly language", often called motherese in English. In general, this way of speaking uses a higher pitch, a wider pitch range and a simplified vocabulary. Scientists now know, however, that this is not always the case. Some mothers vary their pitch range considerably, while others use almost the same tone of voice with children as with adults. In addition, the way people speak to a child changes from month to month as the child's language skills develop. And in some cultures mothers use little or no baby talk when talking to children. Because of this great variation, it is difficult to speak of a universal maternal language. Moreover, we must not forget that fathers, grandparents, babysitters, older siblings, cousins and people outside the family also change their language when they talk to children. For this reason, scientists nowadays prefer to speak of "child-directed language".

Scientific studies show that when we communicate with children, we adapt not only our language but also our gestures and actions: we perform them more slowly, make them larger, and bring them spatially closer to the child. Child-directed speech therefore seems to be only one part of a broader child-directed form of communication.

Emanuela Campisi, Marisa Casillas & Elma Hilbrink
Translated from English by Manu Schütze & Katrin Bangel

Further reading

Fernald, A., Taeschner, T., Dunn, J., Papousek, M., de Boysson-Bardies, B., & Fukui, I. (1989). A cross-language study of prosodic modifications in mothers' and fathers' speech to preverbal infants. Journal of Child Language, 16, 477–501.

Rowe, M. L. (2008). Child-directed speech: relation to socioeconomic status, knowledge of child development and child vocabulary skill. Journal of Child Language, 35, 185–205.

What is the difference between surface and deep structure in language?

The terms surface structure and deep structure refer to different levels that information passes through during speech production. For example, imagine you see a dog chasing a postman. When you encode this information, you create a representation that includes three different elements: a dog, a postman, and the action of chasing. This information exists in the speaker's mind as a deep structure. If you want to express this information verbally, you could produce, for example, a sentence like "The dog chases the postman". This is the surface structure: it consists of the words and sounds produced by a speaker (or writer) and perceived by a listener (or reader). To describe the same event, you could instead produce a sentence like "The postman is chased by the dog". In this sentence the order in which the two characters are named (the surface structure) is different from the first sentence, but both sentences are derived from the same deep structure. Linguists assume that so-called movement operations are carried out during sentence formation, which convert encoded information from the deep structure into the surface structure. These movement operations are also referred to as linguistic rules. Linguistic rules are part of the grammar of a language, and they must be learned in order to produce grammatically correct sentences.
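As a playful illustration of this idea (not a claim about how linguists actually model sentence production), the following sketch shows one and the same "deep structure" (dog, chase, postman) being mapped onto two different "surface structures". The function names and string templates are invented for this example.

```python
# Toy illustration: one deep structure, two surface structures.
# This is a deliberately simplified sketch, not a real grammar.

def active_surface(agent, action, patient):
    # Surface structure 1: "The dog chases the postman."
    return f"The {agent} {action}s the {patient}."

def passive_surface(agent, action, patient):
    # Surface structure 2: "The postman is chased by the dog."
    return f"The {patient} is {action}d by the {agent}."

# The same encoded event: who did what to whom.
deep_structure = {"agent": "dog", "action": "chase", "patient": "postman"}

print(active_surface(**deep_structure))   # The dog chases the postman.
print(passive_surface(**deep_structure))  # The postman is chased by the dog.
```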

There are rules for different types of sentences. Another example of a rule, or movement operation between deep and surface structure, is the relationship between declarative sentences ("You have a dog") and their corresponding interrogative sentences ("Do you have a dog?"). In this case the movement operation involves moving a verb element to the front of the sentence (in English with the help of the auxiliary "do").

By Gwilym Lockwood & Agnieszka Konopka
Translated from English by Paul Hömke & Louise Schubotz

Suggestions for further reading:

Chomsky, N. (1957). Syntactic Structures. Mouton.
Chomsky, N. (1965). Aspects of the Theory of Syntax. MIT Press.

Technical aids are getting better and better at imitating human brain activity. Will we reach the point where implants (e.g. in the brain) enable voiceless communication?

We currently see the brain as an information processor, comparable to a computer, which enables us to interact with an extremely complex environment. In theory, it should therefore also be possible to extract all the information that the brain contains. That could be done with a so-called brain-computer interface (BCI), a connection between the brain and a computer. By measuring neural activity and applying sophisticated computer algorithms and machine learning (in which the computer essentially builds up knowledge from experience), scientists are already able to extract and reproduce some of this information. For example, scientists can reconstruct image information from activity in the visual cortex. Such methods create a digital snapshot of what someone has perceived with their own eyes.

With regard to language, recent studies have shown promising results. With the help of imaging techniques (which record brain activity), scientists were able to determine with great accuracy whether a participant was reading, hearing, or seeing words for animals or tools, or hearing the sounds those animals or tools make. More invasive brain recordings have also shown that we can extract information about the speech sounds people hear and produce. This research suggests that, possibly in the not too distant future, "neural prostheses" for speech may be developed. These would allow people to produce speech with the help of a computer just by thinking about what they want to say.
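As a very rough illustration of what "decoding" means here, the sketch below trains a simple classifier to predict a stimulus category (animal vs. tool) from simulated brain measurements. The data are random numbers standing in for real recordings, and all names and parameter choices are invented for this example; real studies use far richer recordings and far more sophisticated models.

```python
# Minimal decoding sketch with made-up data (not a real experiment).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors = 200, 64
brain_activity = rng.normal(size=(n_trials, n_sensors))   # one row per trial
stimulus_category = rng.integers(0, 2, size=n_trials)     # 0 = animal, 1 = tool

decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, brain_activity, stimulus_category, cv=5)
print(f"decoding accuracy: {scores.mean():.2f}")  # ~0.50 for random data; above chance for real signals
```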

When it comes to voiceless communication, however, the use of brain-computer interfaces (BCIs) still faces major challenges, for several reasons. First, human communication is much more than just speaking and listening: there are many other signals that we use to communicate (e.g. gestures, facial expressions and other forms of non-verbal communication). Second, BCIs have so far been limited to extracting information from the brain. Feeding information into the brain requires a more direct manipulation of brain activity than has been possible so far. Cochlear implants are one example where scientists have succeeded in relaying incoming auditory information directly to nerve cells in the ear, so that deaf people can hear again. However, this low level of stimulation still does not come close to the full stimulation that the brain needs for everyday communication. It is theoretically possible both to feed speech information into the brain and to read it out of the brain in order to enable voiceless communication. However, these processes are extremely challenging and will require enormous further advances in linguistics, neuroscience, computer science and statistics.

Dan Acheson & Rick Janssen
Translated from English by Katrin Bangel & Manu Schütze

Further reading

[1] Schoenmakers, S., Barth, M., Heskes, T., & van Gerven, M. A. J. (2013). Linear Reconstruction of Perceived Images from Human Brain Activity. NeuroImage, 83, 951-961. (link)

[2] Simanova, I., Hagoort, P., Oostenveld, R., & van Gerven, M. A. J. (2012). Modality-Independent Decoding of Semantic Information from the Human Brain. Cerebral Cortex, doi: 10.1093/cercor/bhs324. (link)

[3] Chang, E. F., Niziolek, C. A., Knight, R. T., Nagarajan, S. S., & Houde, J. F. (2013). Human cortical sensorimotor network underlying feedback control of vocal pitch. Proceedings of the National Academy of Sciences, 110, 2653-2658. (link)

[4] Chang, E. F., Rieger, J. W., Johnson, K., Berger, M. S., Barbaro, N. M., & Knight, R. T. (2010). Categorical speech representation in human superior temporal gyrus. Nature Neuroscience, 13, 1428-1432. (link)

[5] Bouchard, K. E., Mesgarani, N., Johnson, K., & Chang, E. F. (2013). Functional organization of human sensorimotor cortex for speech articulation. Nature, 495, 327-332. (link)

Why do we cry ‘au!’ when we feel sudden pain?

There are actually two questions hidden in this one. To answer it clearly, it is best to split it up as follows:

(1) Why do we cry out when we experience sudden pain?

(2) Why do we shout ‘au!’ and not something else?

Regarding the first question, it should be noted that cries of pain occur throughout the animal kingdom. Why? Darwin, who wrote a book on emotions in humans and animals in 1872, thought it was related to the strong muscle contraction that accompanies pain in almost all animals. He saw this as a ritualized version of withdrawing from a pain-causing stimulus as quickly as possible. But that still leaves the question of why the mouth opens. Since then, research has found that crying out in pain also has communicative functions: for example, to warn conspecifics of danger, to call for help, or to receive care from others. The latter function begins in the first seconds of our lives, when we cry and our mother lovingly embraces us. Babies, like the young of many animals, have a whole repertoire of different cries. Within this repertoire, the pain cry - the cry produced when experiencing acute pain - is always clearly recognizable: it begins suddenly and is of high intensity and short duration. Here we can already see the contours of our ‘au!’.

And that brings us to the second part of our question. Why ‘au!’ and not something else? First we should look critically at the question itself. Is it really never anything else? Do you shout ‘au!’ when you hit your thumb, or is it more like ‘aaaaah!’? In reality there is a lot of variation. However, this variation is limited: nobody screams ‘bibibibibi’ or ‘vuuuuu’ in sudden pain. Cries of pain are variations on one and the same pattern. The pattern starts with an “aa” because of the shape of our vocal tract when the mouth is wide open, and ends in something like “aau” as the mouth closes again. The word “au” captures this pattern very well. And that brings us to an important function of language: language helps us to treat experiences that are not exactly identical as similar. This is useful because, if we want to talk about the way someone cries out, we do not have to imitate that cry exactly. In this sense, “au” is a word and not just a cry. Is “au” the same in all languages? Almost, but not quite, because each language uses its own inventory of sounds to express pain. In German it is “au!”, an English speaker says “ouch!”, and someone from Israel says “oi!” - at least that is what Byington wrote in 1942 in one of the first comparative studies of cries of pain. Each of us comes into the world with a repertoire of pain cries and then learns a language as well. Language ensures that we can do more than just cry and scream - we can also talk things over. Fortunately, otherwise this answer could not have been written.

Written by Mark Dingemanse and published in “Kennislink Vragenboek”
Translated into German by Judith Holler, Katrin Bangel & Manu Schütze

Why do we need fixed spelling rules?

Imagine you are asked to write a letter to a friend using only numbers and punctuation marks. You might find a way to use these characters to represent the sounds of your language. But how is your friend supposed to decipher this new alphabet in order to read the letter? The two of you would have to agree beforehand on how words are to be written. In doing so, you would have invented a shared spelling system.

Just like the key to a code, a spelling system is an agreed way of associating letters or other characters with the sounds of a language. Sharing the same spelling means that the speakers of a language can also communicate with each other in writing. The shape of the characters does not play a major role; what matters is understanding how the respective characters (graphemes) represent the sounds we use in speech (phonemes).

Many languages use the same characters but use them to express different sounds, and therefore have different spelling systems. Both German and English spelling use characters from the Latin alphabet, and the two languages share many grapheme-phoneme correspondences. The relation between the letter 'l' and the sound 'l' is basically identical in the English word 'light' and the German word 'Licht'. However, the letter combination 'th' corresponds to a different sound in English (e.g. 'theatre') than in German (e.g. 'Theater'). These rules must be learned in order to pronounce written word forms correctly.
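To make the idea of spelling as the key to a code concrete, here is a small sketch of a grapheme-to-phoneme mapping. The sound values are simplified, illustrative fragments (chosen only to mirror the 'l' and 'th' examples above), not a complete description of English or German spelling.

```python
# Simplified sketch: spelling as a mapping from graphemes to phonemes.
english_spelling = {"l": "l", "th": "θ (as in 'theatre')"}
german_spelling = {"l": "l", "th": "t (as in 'Theater')"}

for grapheme in ["l", "th"]:
    print(f"'{grapheme}': English -> {english_spelling[grapheme]}, "
          f"German -> {german_spelling[grapheme]}")
# The same letter sequence 'th' points to different sounds in the two systems.
```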

When it comes to the question of what constitutes a good spelling system, opinions differ. As a rule, all sounds that distinguish meaning should be represented in the spelling, ideally with as few characters and conventions as possible. In practice, this ideal is rarely achieved. Take English as an example: the words 'pint' and 'print' hardly differ in their spelling but are pronounced quite differently. However, small quirks and inconsistencies in spelling also have their advantages: they often preserve historical information, reveal linguistic and cultural relationships, and accommodate different dialects.

Lila San Roque & Antje Meyer
Translated by Manu Schütze, Franziska Hartung & Katrin Bangel

Further reading

Online encyclopedia of writing systems and languages (link)
The Endangered Alphabets Project (link)
Script source: Writing systems, computers and people (link)

Why is English spoken so much in the world?

Compared to the other answers on this page, this question can largely be answered without reference to language itself. English is regarded by many as a world language because of the importance of the former British Empire and the current dominance of the United States in politics and business.

One could also look for a linguistic explanation, for example that English is an easy language that can be learned fairly quickly. English has no grammatical gender (unlike the German articles 'der', 'die' and 'das'), no complex word formation, it is not a tonal language (such as Chinese), and it uses the Latin alphabet, which is fairly good at representing individual sounds with individual symbols (letters). In addition, because it is so widely used in films, television and music, English is easily accessible and easy to practise for many people. On the other hand, English also has an extensive vocabulary, very inconsistent spelling and many irregular verbs. There are also difficult sounds, such as 'th', and a large number of vowels, which can make it hard to tell individual words apart (e.g. 'love' and 'laugh'). Discussions about which languages are easy or difficult to learn tend to go around in circles, because everyone has a different idea of what is easy or difficult to learn.

The fact that English is a world language can also be explained historically. The United Kingdom (Great Britain, at that time still united with Ireland) was one of the first industrial nations, which enabled it to colonize less industrialized regions faster than other European countries. At its greatest extent, the British Empire covered roughly a quarter of the earth: North America, the Caribbean, Australia, New Zealand, many countries in West and Southern Africa, South Asia and parts of Southeast Asia. The United Kingdom established English-speaking governments, industries and manufacturing in these areas, establishing English as the language of global power in the industrial age. The British Empire broke up after the Second World War, but its role simply passed to another English-speaking great power in the 20th century. The cultural, economic, political and military dominance of the United States in the 20th and 21st centuries has meant that English remains the most important and influential language. As the dominant language of business, science, diplomacy, communication and IT (most websites use English), this is unlikely to change anytime soon.

Gwilym Lockwood & Katrin Bangel
Translated from English by Katrin Bangel & Manu Schütze

What was language like at the very beginning of its evolution?

We do not know! It is one of the greatest puzzles in the evolution of language. Unlike stone tools or skeletons, which can survive as fossils, human language from earlier epochs cannot be studied directly. We have written records going back roughly 6,000 years, which can help us find out what languages were like relatively recently. However, new research suggests that people may have been using languages as we know them for 500,000 years.

To get an idea of how languages might have evolved, scientists use model systems. These include, for example, computer models, experiments with participants, and the study of languages that have only developed recently, such as new sign languages. So while we cannot be sure what the first language sounded like, we can find out how communication systems develop and change.

One series of experiments examined, for example, how communication systems can develop when participants are only allowed to communicate by drawing for a while. It turned out that negotiation and feedback between the communicators are very important for the development of a successful communication system.