
Central European Diacritic Letters: A Typewriter Buying Guide

The rise of national languages in Central Europe drove the spread of diacritic letters, many unique to just a handful of languages. Nowadays, Unicode gives us an easy computational solution. But the fight for diacritic inclusiveness lasted for decades.

Photo: old typewriter detail (iStock.com / RobertKovacs)

The introduction of computing was a revolution with consequences not only for information distribution but also for culture and politics. The race to corner the market for computing systems was fierce – and global – but the Western-developed PC prevailed. Its unification of the field was far from complete, though, with Central European languages among the obstacles. Or rather: their quirks, aka diacritical marks.

The awkward sounds

In most of Central Europe and all of Western Europe, the Latin alphabet reigns supreme. The border between languages using Latin script and those to the east using Cyrillic can be described as political. But then again, so is the usage of certain characters.

Take Czech and Slovak. When the time came in the 18th and 19th centuries to standardize national languages, carons – the wedges over the letters – came in handy. Had Czechs and Slovaks given up using them to mark their ‘sh’ and ‘ch’ sounds, their spelling would have fallen too close to Polish, which writes similar sounds as ‘sz’ and ‘cz.’

That would have been a threat, one capable of blurring national boundaries – especially given that Czechia was, at the time, dominated by German-speaking elites, and written, literary Czech was only beginning its renaissance after centuries of being relegated to peasant status. Lithuanian had a similar story: a language pushed to the back burner when Poland and Lithuania were joined in a union that used Polish as its lingua franca, then returned to the forefront years later, after the two split.

Some argue that Cyrillic is a script tailored precisely to express the sound of Slavic languages. This may also hold true for the Latin alphabet and Western languages. But what about those in the middle? There was a quest for something in between.

Numerous sets of diacritic letters

Many languages across Europe have diacritic vowels. But in Central Europe, there are also a lot of diacritic consonants. Take the Czech and Slovak languages again. They feature all kinds of accented vowels and umlauts, but also č, ď, ň, ř, š, ť, and ž. Some of these diacritic letters also appear in Baltic languages. And there are others. Romanian has ș and ț. Then there’s Polish with ł, ń, ś, ż, ź.

Now it’s easy to see why the people of Central Europe bless the invention of Unicode. Before there was a single universal character set, access to diacritic letters on public computers and displays was very limited. When personal computing took off in the West, no one really cared about these issues. The standard encoding, called ISO-8859-1 or ISO-Latin-1 (did someone say ISO-Lation?), was not very rich, but it still covered most characters not only for English but also for French, German, and the Scandinavian languages, as well as those from the south of Europe.
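The gap is easy to demonstrate today. In this minimal Python sketch, the Western European letters encode into ISO-Latin-1 without complaint, while a Polish ż simply has no slot in that 256-character table:

```python
# Western European letters all fit into ISO-8859-1 (Latin-1)...
for ch in "éüñå":  # French, German, Spanish, Scandinavian: all fine
    ch.encode("iso-8859-1")

# ...but Central European letters are simply absent from it.
try:
    "ż".encode("iso-8859-1")
except UnicodeEncodeError as e:
    print(e)  # e.g. 'latin-1' codec can't encode character '\u017c'
```

The same attempt fails for ł, ś, č, ő, and the rest of the region's alphabet, which is exactly why a separate encoding was needed.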

This was still problematic for the people of Central and Eastern Europe because the lack of diacritic characters can lead to a lot of confusion. For example, in Polish, “jezyk” can mean either “język” (tongue) or “jeżyk” (little hedgehog). And the latter, pronounced the same but written with “rz” (“jerzyk”), is a bird, not a mammal. Estonian “lõhe” is ‘salmon’, but “lohe” is a ‘dragon’ (“Make sure you get that one right before ordering a kilo of salmon in the shop,” urges one Internet user). And losing the dots over the ‘ä’ turns “häbe” into “habe” – pubic hair into a beard (ouch!).
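The collision is mechanical, not hypothetical: strip the combining marks from these words and distinct meanings collapse into one string. A small Python sketch, using Unicode's standard decomposition:

```python
import unicodedata

def strip_diacritics(word: str) -> str:
    # Decompose letters into base + combining marks (NFD),
    # then drop the combining marks.
    decomposed = unicodedata.normalize("NFD", word)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

print(strip_diacritics("język"))  # jezyk (tongue)
print(strip_diacritics("jeżyk"))  # jezyk (little hedgehog) -- now identical
print(strip_diacritics("lõhe"), strip_diacritics("lohe"))  # lohe lohe
```

One curiosity: Polish ł survives this treatment unchanged, because U+0142 has no canonical decomposition in Unicode – the stroke is not a combining mark.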

Fight for your diacritic characters

To correct this, countries from Central Europe fought to popularize another encoding. In the 1990s, ISO-Latin-2 (ISO-8859-2) was slowly introduced – at various paces – into different systems, followed a few years later by Unicode (note the “Uni” part!). Even so, compatibility issues arose with the advent of the World Wide Web, which required both author and reader to use the same encoding to get the diacritic letters right.
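That mismatch is easy to reproduce: the same bytes read under the wrong encoding turn into gibberish (what we now call mojibake). A brief Python illustration:

```python
text = "ż"                        # U+017C, Polish z with dot above
wire = text.encode("utf-8")       # the author saves it as UTF-8: b'\xc5\xbc'

print(wire.decode("iso-8859-2"))  # Ĺź -- the reader guessed the wrong encoding
print(wire.decode("utf-8"))       # ż  -- matching encodings round-trip cleanly

# ISO-8859-2 *can* carry ż, just as a different single byte:
print(text.encode("iso-8859-2"))  # b'\xbf'
```

Anyone who browsed the 1990s web in Central Europe will recognize those stray Ĺ, Ä, and Ă characters sprinkled through otherwise readable text.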

It may therefore seem like computing introduced the problem it could finally solve by introducing ISO-Latin-2 and later Unicode. After all, when people only had a fountain pen and a sheet of paper, no one had any problem with diacritical marks, right?

Typewriter? A pain in the… fingers

True. If you forget all about the decades when it was a massive struggle to buy a decent typewriter. Now that this beautiful clicking device is merely a collectible, it looks like a minor headache. But a few decades back, when it was the tool of the trade for scores of professions, it was more of a pain in the ass.

As an example, my mother’s Czechoslovak-made typewriter with a Polish typeface, a 1962 Consul, had all the Polish diacritic marks. However, apart from the two most popular (Ł and Ż), it had them only in lowercase. But that was the reality of the suboptimal supply of consumer goods on Socialist markets. “You can’t always get what you want” – so why not settle for a workaround? Say, instead of an actual uppercase Ł: type L, backspace, and cross the L with a slash. Simple, right?
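Amusingly, Unicode preserved a digital cousin of that backspace-and-overstrike trick: combining characters, which stack a mark on top of the previous letter. A small illustrative sketch (the slash overlay only approximates a real Ł, just as the typewriter hack did):

```python
import unicodedata

# Precomposed: one code point, what Polish text actually uses.
proper = "\u0141"             # Ł, LATIN CAPITAL LETTER L WITH STROKE

# "Typewriter style": type an L, then overstrike a slash on top of it.
overstruck = "L" + "\u0338"   # L + COMBINING LONG SOLIDUS OVERLAY

print(proper, overstruck)     # the two render similarly...
print(proper == overstruck)   # False -- ...but are different code points,
# and since Ł has no canonical decomposition, even normalization
# will not unify the workaround with the real letter:
print(unicodedata.normalize("NFC", overstruck) == proper)  # False
```

So the 1962 workaround and its digital descendant share the same flaw: the result looks right but isn’t the genuine character.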

Considering all this, it’s easier to understand the old joke about technical improvements that made life easier if you work hard enough. But what’s also visible is that European unification is much, much more than some paperwork to sign.

Przemysław Bociąga

is a Polish journalist and essayist based in Warsaw. An anthropologist and art historian by education, he specializes in combining cultural phenomena with compelling narrative. He has authored and co-authored several books covering lifestyle and history. The most recent of them is “Impeccable. The biography of masculine image”. He has contributed to many leading magazines, both in print and online, and teaches cultural anthropology to college students.
