Misinformation in Machine Translation
From the dawn of the AI age, we have heard a lot about how generative AI has a tendency to produce false or misleading information – often with a swaggering confidence that leaves us inclined to trust what it says. This has opened up a debate around how we should relate to such information, and where we ought to draw our red lines for the use of AI to produce and publish written materials.
For example, should we pass laws making it a requirement to label all content produced using AI so that readers can exercise appropriate source criticism? How can we teach children and young people to use these tools with caution? When should it be okay to produce informational texts via generative AI? And what checks and safeguards do we need to put in place to protect our democratic processes and integrity?
The world of translation is not immune to these issues either. AI increasingly powers machine-translation tools, which creates a risk that these systems may invent information to get around tricky translation problems. Anyone who produces or uses translated content therefore needs to consider carefully the dangers that using AI can pose, and the actions we can take to mitigate them.
Below, we look at this issue in more detail and consider some of the risks involved in the overuse of automated translation solutions.
News headlines mistranslated
A study of multilingual translation systems on Facebook found that machine translation has introduced misinformation into users’ news feeds. The study examined posts translated from English to Tamil and found that up to 20% of generic translated news headlines did not accurately reflect the meaning of the source. This figure rose to 30% for sarcastic or domain-specific headlines.
The paper distinguishes between incorrect translations – which may simply be translations that do not read well or which contain syntax errors – and translations that actually contain misinformation, meaning they provide a distorted view of the facts or opinion expressed in the original. The results suggest that machine translation sometimes plays fast and loose with the truth in favour of producing something – rather than just holding its hands up and recognising its own limitations. The risk, then, is that once this false information spreads, it can be hard to contain, and it may have real-world consequences.
This problem is further compounded by the fact that post-editors are often unable to dedicate time to confirming the veracity of translation outputs. This issue is recognised by another study published in Artificial Intelligence in HCI whose authors argue that it is “unreasonable to expect post-editors (PE) to devote the equivalent levels of time and effort on the MT pre-translated text as in traditional translation projects, given that PE tasks have relatively lower pay but identical, if not tighter, deadlines.”
Translations that break the law
In some cases, misinformation or mistranslations generated by automated translation tools can even cause their originators to wind up on the wrong side of the law. Perhaps one of the most striking examples of this occurred in Thailand – a constitutional monarchy with stringent lèse-majesté laws which prohibit statements deemed offensive to the king or the Thai monarchy.
In August 2020, the Thai Public Broadcasting Service published a seemingly innocuous post about the king’s birthday on Facebook. This was then auto-translated from English, and the resulting text was deemed so offensive that the Royal Thai Police launched an investigation. Facebook ended up having to deactivate its auto-translate feature in the country over the debacle. Because of the harsh penalties in place for such statements, no media outlet in Thailand has so far revealed exactly what the mistranslated text said.
Added toxicity
As well as producing inaccurate and potentially offensive information, machine translation can also alter the so-called toxicity of a text, according to a study by Meta. Toxicity in this context means “offensive utterances and bad sentiments” and can include foul language as well as sexist, racist, ageist or homophobic statements.
As part of its efforts to develop a single machine-translation model for 200 languages on its platform, Meta has spearheaded a number of studies into machine translation. The phenomenon of added toxicity is one of the issues it has identified – namely cases when the translation output introduces elements of toxicity that were not present in the original.
According to a paper presented at the International Conference on Learning Representations, added toxicity can occur both due to mistranslations and to hallucinations. In one example, the tool appears to struggle to translate the adjective “gangly” into Catalan. Rather than choosing an appropriate word with an approximate meaning, the system goes rogue and picks an offensive word which can be back-translated as prick, asshole, or a number of other even less palatable swear words.
Let the humans do the talking
None of this is to say that machine translation has no place. Indeed, many of the examples cited above represent pure machine translation without any involvement from human reviewers. The idea, then, is not to warn against all use of machine translation, but to cast some light on the reasons why these issues occur.
At FairLoc, we believe that we all need to consider more carefully when we use human translations and when we use machine-translation post-editing. By keeping human translators in the driver’s seat, we can be sure to swerve around these issues and ensure that the translations we produce remain accurate and true to the source.
What’s more, the FairLoc stamp serves as certification that a text has been translated by a human. It therefore allows readers to rest assured that the words they are reading have been carefully considered by an intelligent writer. This doesn’t mean the words will be free from errors, but at least they will be free from hallucinations, misinformation and added toxicity.
Click here to learn more about FairLoc.
Deconstructing AI Speak
Ever since the launch of ChatGPT, AI-generated texts have been seeping their way into more and more corners of our lives. Many platforms have introduced automated text generation features to try and make things easier on their users – from dating apps to event ticketing platforms and social media websites.
But have you ever noticed there’s something distinct about AI-generated texts? They seem to have their own style – a manner of writing which is at once generic and sanitised, but also somehow devoid of spark or life. It can be hard to put a finger on it, but sometimes it’s just clear when the text we’re reading has been written by a robot.
Below, we take a look at three examples of AI-generated texts to try and deconstruct what is quickly coming to be known as AI Speak.
The coffee morning
We used the AI auto-generate feature on an event ticketing platform to create a description for a charity coffee morning. We asked the AI to make the text punchy, fun and inviting. Here’s what it gave us:
“Get your caffeine fix and sprinkle some laughter on it too! Join us at the Local Community Centre for a morning of coffee, good vibes, and making the world a better place. We'll be brewing up a storm of deliciousness at 24 High Street, UK. So bring your caffeine cravings and rally your crew—friends, family, and co-workers—for a feel-good fiesta that gives back. See you there, ready to sip, laugh, and do some serious good!”
Although the text is grammatically flawless and has a nice rhythm to it, the sense of enthusiasm feels massively overdone. Expressions like ‘caffeine cravings’ trigger our cringe reflexes, while ‘feel-good fiesta’ feels like an off-the-mark choice in the context. What’s more, the text feels very American in its style, ignoring the fact that the event is set to take place in the UK. Overall, the feeling conveyed is distant, false and unlikely to chime with the intended audience for the post.
The dating profile
We asked ChatGPT to write a brief bio for a dating profile for a man in his thirties who enjoys video games, cooking and hiking. We specified that the text should feel open and warm, but also humorous and witty:
“🎮 Level 32 Adventurous Cook 🍳
Hey, I'm [Name], a guy in his early 30s juggling a day job and a love for virtual quests, hiking trails, and whipping up culinary delights. When not in the gaming zone, I'm chasing sunsets or experimenting in the kitchen (warning: occasional burnt offerings, but I promise it's part of the charm).
When I'm not saving the world on my console, you'll find me hitting the trails, chasing sunsets, and occasionally getting lost in nature (don't worry, I have a great sense of humor and a reliable GPS). Looking for someone who appreciates a well-timed gaming reference, enjoys outdoor escapades, and can handle my kitchen escapades. If you're up for an adventure with a guy equally at home in a virtual world or on a hiking trail, swipe right!
Bonus points if you can beat me in Mario Kart—fair warning, I've got some serious skills. 🎮🏞️🍳”
Overlooking the excessive enthusiasm of the text, the first impression we get is that it reads more like ad copy. Rather than hearing the voice of a real human being, we feel as if we’re being addressed by a corporation with something to sell. Bearing in mind that generative AI programmes draw on datasets from multiple textual sources, we wonder if they sometimes have trouble distinguishing between these different styles of speech.
Drilling down a little deeper into the substance of the text, the ‘well-timed gaming reference’ remark feels like telling rather than showing, while the repetition of ‘escapades’ and ‘when not’ simply smacks of bad writing. Even if the text may seem initially impressive for having been generated out of thin air in mere seconds, we have to ask ourselves: what is its value, really? Would it entice you to swipe right, or would it simply disappear into the background like meaningless white noise?
The Twitter post
We told ChatGPT that we are a young professional in the languages industry, and we asked it to write us a clever and insightful tweet that we can post to impress our colleagues and peers:
“Diving into the intricate tapestry of language every day, where each word is a brushstroke, and every translation is a masterpiece. In the world of languages, I'm not just a professional; I'm a linguistic virtuoso crafting connections across cultures. #LanguageMagic ✨🌐 #Wordsmith”
The first thing that strikes us about this text is its excessively boastful nature and once more, we find its style more befitting of a company or a business rather than an individual. At the same time, there is no originality at the heart of the message. We asked the AI to come up with an insightful observation, but the words ring hollow and appear devoid of substance.
The problem
Perhaps the problem lurking at the heart of all three of these texts is also the one most obvious thing that defines them – they are all written by a machine, and it simply seems impossible for the AI to conceal this fact.
When we write as humans, we bring so much of ourselves to the page – our interests, our experiences, our sense of humour and our own unique personality, built up over a lifetime of interactions and exploration. Because language is our toolkit for making and understanding the world, something of ourselves inevitably shines through in what we produce, even when we’re instructed to write in a neutral and objective way. This is something we are all used to and quite simply expect, whether we realise it or not.
What these three texts appear to show is that no matter the context or the request, AI just can’t seem to convince us that there is a human behind the output. This leaves the text feeling dead and hollow. It gives us that unsettling, uncanny-valley sensation where we recognise that the writing is technically passable – that the prompt has technically been executed correctly – and yet there is something missing. That something is the human touch, and it cannot be replaced.
The [Hidden] Green Costs of AI
There are two major technological revolutions taking place on our planet right now, both of which promise to completely redraw our world and usher us into a bright, bold new future. Artificial intelligence is developing at a rapid pace, and we are fighting hard to combat the effects of climate change as we transition into a green society.
Could it be that the first of these revolutions could also help solve the second? Surely, now that we have AI in our arsenal, you’d think we must be dreaming up all kinds of ingenious solutions to curtail climate change once and for all? Well, sorry to say it, but you’d be way off base. In actual fact, the completely unfettered way in which AI has been unleashed onto the world is causing further damage to the planet and dumping fuel on the fire of climate change!
That’s right, artificial intelligence comes with a rash of associated climate costs, many of which are not immediately apparent to the average user, and which have received very little attention in the public domain until recently. This means that, in addition to the ethical concerns around replacing humans with machines, AI may also be an environmentally unsustainable choice.
In this article, we uncover a few of the hidden green costs that come with using AI, and reflect further on the most responsible ways to use AI going forward.
AI ‘drinks’ gallons upon gallons of water
Cloud computing may seem clean – even the very name conjures up idyllic images of fluffy clouds against a bright, blue sky. But every action we take online has a physical consequence in the real world. One such consequence is the use of water to cool down the vast server farms that manage our queries and make our online lives possible.
It is estimated that for a simple conversation of between five and fifty prompts, ChatGPT ‘drinks’ the equivalent of 500ml of water. That might not sound like much in isolation, but just think of all the millions of conversations that users have been having with ChatGPT since its launch. Not to mention all the interactions with image generation models and other AI tools, which are often used not with a concrete purpose in mind, but merely out of curiosity, or so that we can individually test the limits of this new technology.
The question, then, is, in a world where billions lack access to clean, potable water, can we really afford to waste even a single drop just to visualise countries in stereotypical human form, or to conjure up images of fantasy products that do not exist? Viewed through this lens, our frivolous use of AI and automated tools suddenly begins to look a lot more questionable.
AI fuels huge amounts of energy consumption
The huge amounts of energy consumed by digital infrastructure and software have long been overlooked. This perception has slowly begun to change in recent times, especially since the release of Digital Sustainability: The Need for Greener Software earlier in 2024, which points out that the IT industry has a carbon footprint similar to that of the aviation industry. Other writers, such as Elin Hauge, note that the world’s data centres use as much energy as the entire nation of Spain, or twice as much as Sweden.
Nonetheless, with the rise of AI, the amount of energy consumed by digital technologies has only been increasing. It is estimated that by 2027, the AI sector alone could consume about the same amount of energy annually as the Netherlands. In other words, AI is vastly pushing up energy consumption at a time when it is crucial that we rebalance our energy usage and find more sustainable ways to run our society.
And while it is perfectly possible that renewable and more sustainable energy sources will be able to accommodate this surge, it is equally likely that the rapid adoption of AI will seriously interfere with our targets for net zero and the green transition.
AI is driving a surge in e-waste
One of the major problems posed by our rapidly developing consumer society is the huge amounts of waste and, in particular, e-waste, that are generated. The constant stream of new phones, cables and hardware means that obsolete tech is increasingly building up, and a lot of this waste is not properly recycled.
The rise of AI is affecting this problem in two ways. Firstly, as hardware manufacturers race to capitalise on the demand for exciting new AI solutions, new AI-enabled phones and computers are flooding the market at the expense of perfectly functional existing tech. Secondly, as AI applications continue to multiply, the need for newer and more powerful computing infrastructure is sharply on the rise. This means that data centres are rapidly upgrading their computing hardware, generating vast amounts of e-waste in the process.
According to the World Economic Forum, 50 million tonnes of e-waste are generated each year, only 20% of which is ever recycled. This is a serious problem that we need to tackle, and due to the growth of the AI server market, this crisis is only expected to deepen dramatically.
The solution?
All of this is not to say that AI must be stopped. As we have said many times before, the point of FairLoc is not to oppose AI. Technological development is an inevitable and important part of our future, and anything that can make our lives easier is to be welcomed with open arms.
However, just as we need to carefully consider the implications of AI on human jobs and creativity, we also need to think carefully about how rolling out this technology will affect our environment and the wellbeing of our planet. After all, any benefits to our working days and lives will quickly be rendered irrelevant if we fail to protect our world from climate change.
The solution, then, is more caution and more debate. Of course, it’s only natural to be swept up in the excitement of AI and all that it can offer – and to want to see more of it more quickly – but when it comes to such a vast and major change, it is simply irresponsible not to weigh the pros and cons before diving in.
Perhaps AI should not be made available to everyone. Maybe we need restrictions on its use, or on what kind of applications ordinary users can undertake? Or maybe what we need is some carefully considered policies, rules and best-use practices to guide us in our interactions with AI? The bottom line, though, is that whatever we decide, it should be on the basis of solid reflection and debate, and in the interests of all of society tomorrow – not just a handful of businesses at the forefront today.
What are your views on AI and the green transition? Share your thoughts in the comments, and feel free to get in touch if you’d like to continue the debate!
The FairLoc Manifesto
Human creativity is a wonderful thing. So wonderful, in fact, that we surround ourselves with it every day. From the insightful commentator we read in the paper to the witty podcast host who keeps us company on our morning commute. And it’s not just them – even the sides of cereal boxes, the copy on websites and tannoy voiceovers can provide a platform for the warmth and inventiveness of our fellow humans to shine through. All through the day, each and every day, we see flecks of ingenuity that remind us we are connected to a global community of other brilliant, wonderful and diverse people who understand us and can make us feel at ease.
So how do we feel now that we live in a world where an artificial intelligence is making incursions into some of these spaces? Automation and AI are of course fantastic inventions, and products of the sharpest human minds. They are speeding us up at the supermarkets, keeping us safe on the roads and liberating us from the monotony of routine tasks. All so that we can spend yet even more of our precious time connecting with each other through art, commerce and ideas.
But do we really want AI to replace those genuine, human voices that make up our everyday reality?
For many people, the answer is yes. In boardrooms and office blocks across the globe, the argument is being made for the costs that can be cut and the productivity boosts that can be won. And yet many remain sceptical – suspicious of a world in which computers talk to other computers, and humans become like cogs in a wheel, writing prompts rather than poetry.
The truth is that these tools are not really intelligent in the human sense of the word. They are powered not by curiosity and creativity, but by prediction models that guess at what we want to hear. Rather than create from scratch, they recycle and regurgitate what humans before them have produced. What they do is re-creativity, not true creativity.
As all the debates around AI rage on, the translation and localisation industry finds itself on the front line. For a long time, there has been a general feeling that translators will be the first domino to fall. Their work is sometimes seen as mechanical and routine, something that the machines can easily do for us. For many, the time is now ripe to let the machines do the heavy lifting, with humans reduced to the role of mere reviewer instead. But reviewing is not creating. And translation is not routine. To us, this is a bit like asking the next Picasso or Frida Kahlo to rubber-stamp computer illustrations, rather than to wield their own brush.
You see, the truth is that translators are just as creative and human as all the other voices that make up our world. They use common sense and an innate understanding of the original author to grasp precisely what is meant in all its shades of nuance and subtext. They then set about completing a puzzle, slotting all the pieces of meaning into the framework of a new language – a language with different rules, rhythms, flows and feel. And they do so with warmth, humour, finesse and pizzazz. With a sense of care and compassion for the writer and the reader they help to connect. And a sense of professional pride in the role that they play.
To put it simply, translation is a creative and a human endeavour like any other, and while MT has its place, it can never replace the love and dedication that a living, breathing professional brings to the page.
FairLoc – Created by Humans with ❤
That is where FairLoc comes in.
FairLoc is born out of this struggle. It has been conceived in recognition of the fact that computers cannot replace humans, and in direct response to the frenzied embrace of all things automatic that has engulfed our industry in recent years. As professionals working within the localisation sector, we have seen how clients are too quick to embrace machine translation, and too unwilling to listen to our advice to the contrary. Even though they all inherently understand the value of human creativity, they have been gripped by the spirit of the moment which seems content to toss it overboard.
FairLoc’s ambition is to change all that. It aims to make a virtue of human translation once more. To help the industry realise that what was for many years the industry standard – indeed, the only way to guarantee a good translation – must reign supreme once more.
At the same time, FairLoc is not a commitment to never use MT or AI again. On the contrary, we recognise that technology lies at the heart of the localisation industry. But it is a tool to be wielded by humans, not a thinking mind in its own right that can take over the entire process. We believe that language is a human concept, creativity is a human ability and translation is a combination of both. So why would we hand that job over to a machine?
FairLoc’s mission may not be an easy one – it is never simple to go against the grain and call for reflection in a time of great change and rapid development. But it is a necessary one, for if we sit back and do nothing, humans will be pushed out of our sector to the detriment of us all. Not just of the dedicated linguists who trained for years and work hard to do what they do, but of everyone who reads, uses and relies on their translations.
So for a better, fairer and more vibrant world not just for translators, but for us all, consider supporting us in our mission. Visit the FairLoc website today to learn more and get started.
And if you’re not yet convinced, that’s okay, too. You’re more than welcome to get in touch with us for a chat; we’d love to hear your views and see if we can bring you round to our way of thinking. After all, we’re only human, and we love a challenge!
But whatever your standpoint, we’re sure we can all agree that the world is a better place when we let the humans do the talking!
Translation Theories in Practice
FairLoc came into being in part to counteract a growing sense that translation is a mechanical practice that can easily be automated. But if you look at the history of translation as a craft, it has long been entrenched in the world of academia and intellect. In the twelfth and thirteenth centuries, for example, the Toledo School of Translators was considered a European centre of culture and learning. At the crossroads of the Arabic and the European world, it was a repository of thought, knowledge and contemplation, attracting scholars from far and wide.
Today, translation students study extensive programmes at Bachelor’s and Master’s level which examine the works of academic heavyweights from Umberto Eco to Jacques Derrida and Susan Bassnett. These students not only consider the practical elements of translation, but they also reflect on how the act of translating intersects with other disciplines such as philosophy, art, culture, morality, ethics and history.
What this means is that the study of translation equips human translators with a number of tools and considerations that they can then deploy in their professional lives. These are skills that require a deep level of reflection and which no machine can match. To get a sense of what these entail, let’s take a quick crash course in the academic theory and practice of translation:
Domesticising versus foreignising translations
One of the key threads to run through academic translation theory is the difference between the domesticising and the foreignising approaches (sometimes also referred to as exoticisation). In other words, to what extent should the translator change the text so that it conforms with the world of the reader, and to what extent should we allow it to remain strange, foreign and somewhat unknowable?
Sometimes domesticisation can sap a text of what makes it appealing. It can also ring untrue if the world conjured up by the translator grates against the imagined world held in the head of their reader. Go too far in the other direction, however, and a foreignising text can feel strange and impenetrable. It may even trigger accusations of cultural appropriation if certain terms or references are retained in a way that feels tokenistic or exploitative, or if their retention serves to other or to essentialise the foreign culture.
The best translations combine these two approaches in a way that gives us a new understanding of the foreign world, without reducing it to something simplistic or false. When deciding whether to domesticise or foreignise a particular excerpt or reference, human translators will consider dozens of different factors, from possible misinterpretations and reinterpretations to contemporary debates around cultural borrowings and the ethics of each choice.
The hermeneutic approach
The study of hermeneutics is associated with scholars such as Friedrich Schleiermacher, Wilhelm Dilthey, and Hans-Georg Gadamer. It concerns the study of interpretation and accepts that language is not a transparent medium for the communication of information as it is wrapped up in a web of cultural, social and historical factors.
In practice, the hermeneutic approach therefore involves not just consideration of the actual source text, but of the author and their historical and cultural context in order to tease out a more nuanced reflection of its many layers of intended and unintended meaning. However, the study of hermeneutics also recognises that the translator brings their own biases and preconceived notions to their interpretation of any text. Gadamer refers to this as a fusion of horizons, suggesting that the worldview of the translator inevitably influences the process of textual interpretation.
Human translators are aware of all these factors and can bring them into play in their problem-solving. For example, if something does not seem quite right, the translator can reflect on the author’s background and on their own worldview in order to enter into a dialogue with the text and iron out any kinks. This enables deeper meanings to be brought to the fore and for an altogether more considered and conscientiously crafted translation to be produced.
The translator’s (in)visibility
You may have heard the old adage that a good translation is one that reads like a native text. The idea is that the translator ought to disappear and leave no trace of themselves in their work. However, this notion was challenged by the American scholar Lawrence Venuti in 1995 when he published his book The Translator’s Invisibility.
In this publication, Venuti argues that because translation is necessarily an interpretative process, the translator should be afforded a more visible and active role. Rather than downplay the mediation of a translator, he encourages translators to make bolder and more deliberate choices in their work. This includes retaining elements that may seem foreign or challenging to readers, and which thus remind them that they are reading the product of another culture with its own complexities and idiosyncrasies.
Above all, Venuti’s work has reinvigorated centuries-old debates about the ethics and power dynamics inherent in translation and how different cultures and concepts of existence are represented across linguistic domains. These are complex yet important debates, especially in our current world which is at once globalised yet also steeped in misinformation and competing agendas. Professional human translators consider their position and their visibility carefully as they work, and make decisions on how to represent the words they have been entrusted to translate.
The human toolkit
All of these theories – and many more – leave the translator with a deep understanding of their craft and many tools they can use in their work. Now contrast this with the approach of the machine, which simply cobbles together a patchwork of old translations and uses datasets to guess at what word a human might use next.
It picks its words blindly, without any understanding of the wider cultural context from which its text has come or into which it is going to be inserted. Unlike a human translator, the machine has no sense of itself and its own biases, and it cannot reflect on these or mitigate against them in its work.
In short, the machine is a blunt tool that can hammer together a translation of sorts, but it is completely unlike the refined and complex human mind, with its knowledge of and appreciation for the ethical, cultural and philosophical considerations of translation.
So there you have it! We hope you have enjoyed this crash course in translation theory, and that it has given you a fresh appreciation for why FairLoc exists! To learn more about FairLoc, or the wonders of human creativity in translation, check out our About page here.
Will AI Make Us Stupid?
New technologies make our lives easier, but at what cost?
Ever since the industrial revolution – or even since the invention of the wheel! – we have been outsourcing familiar tasks to new technologies. In our modern age of convenience and automation, we can buy complete meals in the supermarkets, summon a taxi with the touch of a button or use an app to hire anyone from a plumber or an electrician to a dog walker or a therapist.
What’s more, we are standing now on the precipice of an even easier and more automated future – one replete with driverless cars, self-cleaning homes and AI assistants who can draft our emails, plan out our days and even do our jobs. History shows us that even if these changes can be disruptive in the short term, they improve living standards and quality of life in the long term – but is there a line that can be crossed?
At FairLoc, one of our ambitions is to create more dialogue about what happens when we entrust technology with the jobs we have long done for ourselves – especially jobs in the creative and cognitive industries that call for a bit of wit, warmth and an understanding of the human condition. What will happen to us as AI begins to take on more and more of these tasks? We explore this question below.
The tides of change
In Spain, a hugely popular television show reached its end in November 2023. Cuéntame cómo pasó (Tell Me How It Happened) told the story of Spain’s recent history through the lens of a single family. In its first season, set in 1968, one of the recurring jokes is the reaction of the family’s grandmother to all of the new mod-cons available in their fledgling consumer society. Having vowed never to watch the TV before it even enters the home, she soon becomes addicted. Sceptical of the washing machine at first, she is soon proclaiming with quasi-religious fervour that it has revolutionised their lives. From the first family car to the hallowed sewing machine, the joke repeats to the delight of audiences from another age.
And indeed, watching from our perspective today, it is outright hilarious to think that anyone could ever have mistrusted these staples of modern life. Even harder to understand is how we ever could have lived without them – and all the inventions that have come since, from dishwashers and tumble dryers to smartphones and microwaves. Simply put, our lives today are anchored by machines and technologies that relieve us of hard manual labour and tedious, time-consuming tasks.
But at the same time, it seems that many of us also carry around a sense of what we’ve lost over the years. Despite technology taking jobs off our hands, our lives have nonetheless grown busier, and we have come increasingly to rely on conveniences that can help us out. As a result, simple skills like knowing how to mend a shirt, fix a leak or even cook a meal have been lost by many of us. This can sometimes leave us feeling disconnected from our heritage or traditions, and in times of inflation and financial hardship, it can leave us feeling lost, unsure or backed into a corner.
What happens when we outsource creativity?
The march of technological development has not abated, and today we are witnessing the onset of some major advances in the world of AI. This technology appears to promise a world in which we are not only able to outsource manual labour to automated systems, but even cognitive and creative tasks. Since the launch of ChatGPT, we have seen this technology used to write reports, websites and blog posts. It has been integrated into online platforms like EventBrite and used to power AI teachers and companion chat bots, shaking up a wide range of industries from copywriting and translation to education and marketing.
But what will happen to us when we start offloading these kinds of cognitive tasks? Will our abilities to write, read, analyse information and solve problems go the way of the old manual tasks we never do anymore? Could there be a risk that we might become complacent and lazy – that our brains might grow lethargic from lack of exercise, to the point that they are no longer able to perform these routine tasks? And what would that mean for our daily lives, our sense of purpose and even our identities as thinking, productive and creative beings? It seems the deeper we dig into this quandary, the more questions we uncover!
Different schools of thought
So far, there are different schools of thought on what the future might hold. Some writers have considered the idea and still believe that allowing AI to take over our cognitive tasks will ultimately be beneficial. Writing in the Guardian, Simon Winchester suggests that by allowing algorithms to purge our worlds of unnecessary information, we will be able to free up more space for creative and inquisitive thought. This could even lead to the emergence of the next great philosophers, he argues.
Meanwhile, Pankaj Chawla argues in Forbes that the impact of AI will be much like that of Google. Upon its launch, there were similar fears that over-reliance on the search engine would lead us to become lazy and unimaginative. The reality, however, was that Google became an important and now essential tool of investigation and discovery, actually boosting our creative capabilities rather than diminishing them.
But even so, there are others who warn that an over-reliance on generative AI may undermine our own capacities over time. Nir Eisikovits, for example, warns that our ability to make judgements may decline the more we entrust AI to make decisions for us. What’s more, AI also threatens to remove serendipity from our lives and could hamper our aptitude for critical thinking. Nate Anglin, meanwhile, warns that AI offers us an easy route around challenging tasks that effectively switches off our imagination and makes it more difficult for us to develop sparks of genius. In short, the worry is that by reaching for the easy AI option before doing some mental legwork of our own, we will undermine our own general intelligence and cognitive skills.
FairLoc and the future
From our current standpoint, there is still a lot we don’t know about how this technology will unfold. But as we ponder these questions and considerations, we can’t help but feel it is worth remembering that language is a human concept. It is not only something that springs organically from the human experience, but also a lens and a filter through which we understand and experience the world. As a tool and our most central mode of expression, it is also tightly linked to our own personal development and sense of self. It is therefore well worth asking what happens when the language we consume and produce comes not from ourselves but from machines.
Far from wanting to be the sceptical technophobe who shies away from new developments – or the beloved abuela from our Spanish TV show, the butt of jokes for future generations – we are keen to embrace new technologies in all their shapes and forms. But at the same time, we also want to encourage dialogue about what they will mean and what place they should have in our society. Past changes have altered our lives for better and for worse, and as new technologies encroach into the realm of the human mind, we think it’s well worth trying to root out the worse. That is one of the reasons FairLoc was created, and together with your help, our ambition is to ensure that AI allows us to continue marching forward instead of stepping back.
So no matter whether you think AI will make us stupider or smarter, slower or faster, better or worse – there is a place for all of us in the conversation, and plenty of good reasons to believe that we should let the humans do the talking! Learn more about how FairLoc works here.
Who owns translator data?
Data is invariably described nowadays as the new oil. We know that everyone from supermarkets and social-media platforms to newspapers and foreign governments wants to collect, collate, sell and use our data for all sorts of different purposes. And we understand that the data we produce has value – whether we’re airing our thoughts on social media or choosing between tubes of toothpaste in the corner shop.
Yet at the same time, we often fail to appreciate the value of data when we’re on the other end of the equation – when we see the fruits of data that has been harvested. Take machine translation, for example. When we put text into the likes of Google Translate or DeepL, we tend to attribute authorship to the machine, as if it had a mind of its own, and we leave it at that. But the truth is that this output is actually data – data originally produced by human translators, tweaked by reviewers, fed into an algorithm, monetised and recycled in a new form.
The same is true of generative AI chat bots that produce output based on human datasets. And to some, the way these tools work is tantamount to plagiarism or copyright theft. After all, they take texts written by humans and reuse them without giving credit or paying royalties. In the world of arts and literature, many have already spoken out and several lawsuits have been launched. In the translation sector, the issue has opened up a Pandora’s box of tricky questions around who actually owns the legal and moral rights to use and profit from decades of human translation work.
How these debates pan out will have huge implications for the future of the industry – so let’s take a closer look at how MT works, how human linguists fuel it, and what we can do to keep things fair.
Manteau, pelage ou couche?
Primitive forms of machine translation worked on a simple identify-and-swap basis. Dictionaries were matched together and one word in English would be replaced with its equivalent in French, for example. This was very limiting, however, because a single word in one language might have multiple different translations in the target language. The word for coat, for example, might be different depending on whether we’re talking about a winter coat, an animal’s coat or a coat of paint.
Today, modern MT tools are much more advanced – they use neural technology and machine learning to work out which word is statistically most likely to be correct, and to structure sentences in a more natural way. By considering related words in the broader text – like wall, paintbrush, winter, stoat and so on – they can better divine exactly which word to use in each case.
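To make this concrete, here is a toy sketch of the basic idea – choosing between manteau, pelage and couche for the English word “coat” based on which context words appear nearby. It is purely illustrative: real neural MT learns these associations from vast amounts of data rather than hand-written lists, and the word sets below are invented for the example.

```python
# Toy illustration only (not how a real MT engine works internally):
# pick a French translation of "coat" by scoring each candidate against
# the context words in the sentence. The cue lists are invented.

SENSES = {
    "manteau": {"winter", "wear", "wool", "scarf", "cold"},       # a coat you wear
    "pelage":  {"animal", "fur", "stoat", "cat", "shiny"},        # an animal's coat
    "couche":  {"paint", "paintbrush", "wall", "primer", "dry"},  # a coat of paint
}

def translate_coat(sentence: str) -> str:
    """Return the candidate whose context cues best match the sentence.

    Ties fall back to dictionary order, which is one reason real systems
    need far richer statistics than this sketch.
    """
    words = set(sentence.lower().split())
    scores = {french: len(cues & words) for french, cues in SENSES.items()}
    return max(scores, key=scores.get)

print(translate_coat("the wall needs a second coat of paint"))  # couche
print(translate_coat("the cat licks its shiny fur coat"))       # pelage
```

Crude as it is, this is the intuition behind the wall-and-paintbrush example above: the surrounding words tip the scales towards one sense or another.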
This is possible because MT tools are trained on datasets – huge corpora of parallel texts made up of millions upon millions of words and how they have been translated in the past. Often it is unclear exactly where these datasets have come from, but they can be acquired from brokers or via webcrawlers which scrape text from the internet. Training MT tools on these bilingual corpora allows them to identify patterns and learn what words tend to go together.
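As a rough illustration of what learning from a parallel corpus means, here is a toy sketch that counts which French words co-occur with “coat” in a tiny invented corpus. Real training sets contain millions of sentence pairs and far more sophisticated statistics, but the principle – surfacing patterns from past human translations – is the same.

```python
from collections import Counter

# A tiny invented parallel corpus of (English, French) sentence pairs.
parallel_corpus = [
    ("a winter coat", "un manteau d'hiver"),
    ("a coat of paint", "une couche de peinture"),
    ("the animal's coat", "le pelage de l'animal"),
    ("a warm coat", "un manteau chaud"),
]

# Count the French words that appear whenever "coat" appears in the English.
cooccurrences = Counter()
for english, french in parallel_corpus:
    if "coat" in english.split():
        cooccurrences.update(french.split())

# "manteau" comes out on top simply because it is the most common
# translation of "coat" in this (invented) corpus.
print(cooccurrences["manteau"])  # 2
print(cooccurrences["pelage"])   # 1
```

Every one of those counts exists only because a human once translated the sentence – which is the point made in the next paragraph.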
What this means is that machine translation is impossible without human translation, and that renewed human input will always be necessary given that language is a natural and organic phenomenon. It is not something fixed and captured in the existing corpora – it evolves and changes on a near daily basis. Without humans continuing to translate neologisms and new slang words like chillax, staycation, skimpflation, fam, deep-fake and literally thousands of others, the machines will never know how to translate them.
So what becomes clear then is that machine translation tools do not actually produce anything out of thin air – they use algorithms and prediction models to cobble together existing translations and produce fresh text for whatever input is fed in. But with so many people involved in making this possible, who actually gets to take credit for the output? The machine itself? Its developers? Or the translators who did all that mental legwork in the first place? How do we decide who has the right to use, sell and profit from the information fed into and out of machine translation tools?
‘Systematic theft on a mass scale?’
Although these issues have been simmering within the translation industry for a while now, they have recently gained traction in broader media debates following a number of notable advances in AI. When ChatGPT was launched at the end of 2022, many of us marvelled at what it could do. But before long, some began to ask questions about the ethical and legal aspects of training chat bots on existing content.
For example, the US Authors Guild launched a class-action lawsuit against OpenAI, accusing it of “systematic theft on a mass scale” for using their work. Other artists also made notable headline-generating interventions, such as Nick Cave, who branded generative AI an exercise in “replication as travesty”, and Sting, who cautioned that we need to be wary of how we use the tools. Illustrators, meanwhile, have launched the hashtag #NotoAIArt, pointing out that image-generation bots replicate their style and designs without giving them any credit or paying royalties.
In response to some of these concerns, OpenAI announced in September 2023 that it would enable websites to block its webcrawler from scraping their content. Many publications, including the Guardian and the New York Times, in addition to big e-commerce platforms such as Amazon, have since chosen to avail of this option. However, this only concerns future content, and does not allow for the removal of materials from existing datasets, which continues to be something of a legal and ethical black hole.
Translation and copyright
Many of the artists, publishers and illustrators affected likely have legitimate claims against how AI companies have been using their data, and some of the cases being tried now may shape the future framework for generative AI. Translators, however, are in a stickier position.
Part of the problem here is the sheer number of parties involved – in many jurisdictions, translators own copyright to their work only as derivative texts, meaning the original author retains a say. In practice, moreover, translators mostly cede their authorship rights to the agency that hires them, allowing the agency to reuse translation data to offer client discounts. So by the time technology gets involved, there are already three parties with an ownership stake in the content, not to mention any proofreaders or client reviewers who may also have helped shape the text.
Adding to this complexity is the lack of transparency along the chain. It is not possible to take a piece of MT output and reverse engineer it back to the translations it has drawn from or used. It is impossible to link back to an individual agency, never mind an individual translator. This makes it difficult for linguists or agencies to prove their content has been used without authorisation, and it complicates the idea of any kind of royalty scheme that might seek to compensate linguists for their data.
Towards a fairer future
Yes, data is indeed the new oil and even within the translation industry, it is fuelling new possibilities and shaking up revenue streams for many, from linguists and agencies to big tech companies entering the market. As we have seen, the question of who owns translation data is a thorny one with no clear answer, so instead perhaps we should be posing the question in a different way – how can we use this data fairly?
Today, many translators feel like they are getting a raw deal. Like turkeys voting for Christmas, they know their data has helped to fuel and refine MT programmes – programmes which are fantastic and incredibly useful, and which could be an invaluable boon to the entire industry, but which, at the same time, are currently reducing translator earnings and, in some cases, muscling linguists out of work altogether.
We are standing on the frontier of a new world in which AI and automation will play increasingly fundamental roles in our lives. As we cross the threshold and move further into this new reality, it is important we remember that unlike oil, data is not a raw material to be mined from rock and earth. It is the product of hard-working and creative humans – and no matter what we do with it, we need to be fair towards those who made it possible.
AI Apocalypse! Imagining a World of Re-Creativity
Another day has broken! You wake up once more, hardly realising it, gently nudged back into consciousness by your electro-pulsating wristwatch and sunrise lamp, both calibrated to perfection based on years of harvested data. You walk into the kitchen and with the usual familiarity, you turn the dial on your breakfast dispenser and watch as your app-composed mix of grains and seeds dispenses into a bowl.
There’s something about the colours that catches your eye and makes you think of Stacey, the girl who used to sell granola at the local farmer’s market. She was so friendly, and always coming up with fun new flavours. The market has been closed for years now, ever since everyone made the switch to intelligent portioning. You wonder briefly whatever happened to Stacey but shrug off the thought as you switch on your tailored AI news feed.
The quasi-humanoid voice begins reading out its sequence of generated stories tailored to your interests. Although it mimics the style and cadence of the old news readers, it still sounds somewhat hollow and you find yourself tuning out. You did catch the first story though, something about serious disruptions after the new self-driving software update finally launched. It seems cars began communicating with the supermarket check-outs and both networks had to be shut down when they started inventing a new language. That old chestnut again!
After exactly five minutes, the update ends. No music, no conversation. You miss the old local station – which still exists, of course, but it hasn’t quite been the same since they replaced their DJs with AI models to cut costs. You make a joke about the headline story to your home assistant – something about him talking to the supermarket checkouts and maybe blagging you a discount. But he doesn’t understand and starts listing the addresses of nearby supermarkets instead. “Stop, stop!” you screech. So much for the new banter module you installed last month!
It’s time to focus now, anyway. You activate your HoloScreen and log into the email app. Tonnes of emails have come in overnight from the Mexico office. As per company policy, you have your AI assistant synthesise the content to save time on going through them all. You then ask it to generate a string of smart replies based on a keyword analysis and the company’s communications policy. It’s been a while since you last read the policy document, you think to yourself, but then you remember the last draft was generated by AI in code anyway. It’s really just meant for the machines nowadays.
Once the drafts have been generated, you are tempted to hit send without checking them over first. That would mean clearing your whole inbox in under five minutes, giving your productivity metrics a much needed boost. But you glance over the text out of habit and notice a reference that doesn’t look quite right. Something about riding around on top of a chicken?
You think for a second and then you remember – isn’t that an idiom in Spanish? Montar un pollo – it means to make a big fuss. It used to pop up all the time in the early days of auto-translated emails, you remember. You learned so many Spanish phrases and sayings back then out of sheer necessity, the machine translation software just didn’t know how to handle them and so you had no choice but to learn what they meant.
You take a closer look at the text and realise it concerns the new product launch next week. That’s Marta’s project, you think to yourself, and you remember fondly the email exchanges you used to have. You used to help her with her English and she told you in unnecessary detail about the latest exploits of her many cats. Sometimes she even attached photos! It would be easiest just to pick up the phone and untangle this mess the old-fashioned way, you think to yourself. But that would be a breach of the company’s AI efficient-use policy …
You bring up her contact card and hover the cursor over the dial button. You hesitate for a minute. If you get caught up in conversation, that could really harm your productivity score for the morning, and you can’t afford another black mark against your name. The government is still in gridlock over the new universal basic income bill, and the labour market is in free fall. This job is all you have.
A warning flashes up on screen – your inbox should be cleared by now, and a new email has just come in. You close down Marta’s contact tab and hit send on the AI-generated drafts. It’s going to be a long day ahead.
////
The above is our imagining of a plausible future scenario, told with what we in our native Danish would refer to as a ‘sparkle in the eye’ – or a touch of tongue-in-cheek humour, as a human translator might put it.
Of course, nobody knows with any certainty what the future will look like, and at the same time, there is no doubt that artificial intelligence holds the potential to vastly improve our lives. It can liberate us from mundane tasks and help us see patterns and trends that elude our cognitive capabilities. From curing diseases to streamlining processes, there is a lot of potential in what AI can offer.
But right now, we find ourselves in a transitional phase where we are trying to figure out what we should entrust AI to do and what we shouldn’t. We see that AI can mimic a lot of human tasks, including some creative ones, and we are trying to decide where to draw the line. The future we present above is one in which that line ends up drawn just a little too far out from where it should be.
At this point, you might be thinking that some of the examples seem far-fetched, but did you know that AI radio DJs already exist today? The quip about machine-translated idioms is taken directly from our own experience, and ChatGPT has already been adopted as a tool for drafting emails. In fact, we could have gone even further – in the last few years, generative machines have had books published, produced award-winning art and even made Homer Simpson into a post-modern recording artist. We are seeing a re-creativity explosion at the minute, as we stand on the precipice of a murky new age.
Could you imagine a future in which all of this becomes the norm? Imagine sitting down to watch a drama series tailored to your interests, scripted by a bot and acted out by AI versions of your favourite actors – a television show that only you have seen, that you can’t share or talk about with anyone else.
Right now, new developments are stirring up conversation and debate, with different views taking shape on both sides. The worry is that they will also edge out human producers – a concern not just because it means job losses and poorer working conditions in the short term, but because it means a world of re-creativity – old human input recycled by machines, without any spark or wit. A world without friendly voices on the radio, banter with colleagues and yes, even those small interactions with local producers, artists and business owners at the farmer’s market.
Translation is on the front line of some of these changes, representing what many perceive to be the first domino to fall. It has often been said that the best translations are the ones you don’t notice – the ones that seamlessly convey language without drawing attention to themselves. Perhaps it is for that reason we might think that machines are fit for the task. But all language is human, and translators form part of the wide and varied patchwork of human voices that inject our world with colour and warmth. Without them, we risk sleepwalking into a future that looks something like the one above.
That is why we created FairLoc. That is why we should let the humans do the talking! To learn more, click here.