
The Matrix Has You

What if I told you that, like almost everyone else you know, you are likely a slave? You are trapped in a prison you cannot smell, taste or touch. A prison for your mind.

You’re reading this because you know something. What you know, you can’t explain – but you can feel it. You’ve felt it for years – that there’s something wrong with the world. You don’t know what it is, but it’s there. Like a splinter in your mind – driving you mad.

But unlike the Wachowski sisters’ Matrix, you weren’t born into this. Instead you silently opted into the largest sociology experiment ever created. You didn’t know that’s what you were doing – but it’s what you did. The prison you reside in is present on your smartphone, on your smart TV, your smart watch, smart thermostat, smart gym equipment, smart mattress. If it’s smart, the likelihood is that it’s part of a gigantic cage made to your exact specifications.

Your bank, your health provider, your government. All these things conspire to trap you. And it sounds insane, like a conspiracy, but it’s terrifyingly real. You almost certainly already know information about you is used by services attached to all these things. Quite a lot of them wouldn’t be able to work without knowing something about you. What you maybe don’t understand is how they work together to trap you.

Like any manipulator, the process of entrapment is steady and calculated. I’ll be outlining three major forms this takes: influencing your movement and actions, disrupting your relationships and reshaping your identity, and finally, training you to actively resist breaking out of the system.

So you have a choice. You can stop now, go back to your day, lead a normal life.

Or keep reading, and we’ll see how far the rabbit hole goes.

Nearly 99% of test subjects accepted the program as long as they were given a choice

A recurring theme here will be the overlap of using and being used; a double movement of influence. A lot of the ways online networks affect your actions seem quite benign. You share your location with google for helpful directions. Instagram shows you a popular viral pop-up store and you decide to go. A location-based mobile game tells you of a high-value item nearby. You haven’t been forced to do anything, you were just interested in something and were given some gentle suggestions.

However, if you use Google’s “live view”, your camera feed is used to update their street view data. Meta creates a profile of you to know the exact time and place to show you an advert – or a post that is also an advert – to influence your actions off-platform. Niantic, the company that makes Pokémon Go, allows businesses to purchase the placement of significant game items near them, guaranteeing foot traffic.

But I’m not telling you anything new here am I? This is just business, nothing comes for free. If you don’t pay for the product, you are the product. This is the era of the platform economy.

The Platform

The word “platform” is an interesting one – at once a constant presence in our daily lives and strangely nebulous. A noun for a non-space, a void awaiting your presence. It’s worth digging into, and its modern usage was born in the early days of the internet.

When the web really started taking off in the 90s, it was very normal for users to build their own websites. It was DIY. But not everyone had the resources required to host them, and when it came to making financial transactions almost nobody did. This is why eBay became so successful. It provided a service for other people to use, and took a small cut of the profit. Thus the modern meaning of “platform” was invented – an online space that exists as a for-profit entity providing the illusion of a public service. A digital mall, where you’re free to do what you want as long as you follow the rules and can afford the cost of entry.

Now almost the entirety of our online lives is mediated by these platforms. The social ones are the most overtly present in people’s lives today: finely honed attention-monopolising machines. They use gambling mechanics to wire your brain into a Pavlovian feedback loop of addiction, trying to keep you logged in and paying attention as long as possible. This is in the platform’s interest because it makes them money. And in doing so, a two-way relationship begins between yourself and what you consume. You are what you eat.

Machine learning, fundamentally, is very good at a specific kind of problem solving – categorisation. It doesn’t matter what political affiliation you have, whether your beliefs are strong or weak, or how “objective” you think you are. The only thing that matters is how an algorithm can fit you into a neat little niche and get you to stay there. It’s not that exposure fundamentally changes you at any level… It just sands the corners off your personality. Gently encourages you to fit into your perfect niche.
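To make the categorisation point concrete, here is a deliberately toy sketch of the core operation: snap a user to the nearest pre-existing niche. The niche names and interest dimensions are entirely invented; real recommender systems are vastly larger, but the underlying move – assign you to the closest box and serve content accordingly – is the same.

```python
# Toy sketch: pigeonholing users by interest vectors. All names and
# numbers are invented for illustration; this is not any platform's code.

def nearest_niche(user, niches):
    """Return the niche whose centroid is closest to the user vector."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(niches, key=lambda name: dist(user, niches[name]))

# Hypothetical interest dimensions: (politics, sport, crafts)
niches = {
    "political-junkie": (0.9, 0.1, 0.1),
    "sports-fan":       (0.1, 0.9, 0.1),
    "cottagecore":      (0.1, 0.1, 0.9),
}

# A user with genuinely mixed interests still gets snapped to one box.
print(nearest_niche((0.5, 0.2, 0.4), niches))  # → political-junkie
```

Notice that the mixed user is not represented as mixed: the system’s output is the box, and everything downstream of it treats the box as the person.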

Frames from Junji Ito’s “The Enigma of Amigara Fault”: “Th-This is my hole! It was made for me!”

The insidiousness of this is, no matter what position you take, you can be neatly pigeonholed somewhere for people just like you. Strongly politically aware or completely ambivalent. Right or left, authoritarian or anarchist, cats or dogs. And if there isn’t a category for you, you’ll be slowly nudged into the one that is your closest match.

This is done at a variety of levels. The most obvious is targeted advertising – aimed at you by location and interest, calculated to reach you at your weakest and most susceptible moments. There’s also your algorithmic feed, showing you whatever it can to hook you in, whether that be outrage, an amusing dance, a tempting argument, a piece of clickbait. It produces an intoxicating cocktail of dopamine and cortisol designed to make you feel, as much as possible, that your online life is your life. It doesn’t end there either; it seeps out into legacy media and ‘real’ life. News outlets increasingly need to rely on clickbait tactics themselves to succeed. And the ‘culture war’ seems to worsen in proportion with it. Politicians are now way more scared of bad PR than they are of doing the right thing.

And all of this funnels you, ever so gently, along algorithmic fault lines. Surrounded by caricatures of your own opinions, positions, likes, dislikes. And the more you’re exposed to it, the more you are encouraged to express yourself similarly.

Sometimes the fault lines start crossing the family dinner table. And the more isolated you feel, the more you turn to a place of familiarity. A place where you know you’ll have a sympathetic ear.

This isn’t an echo chamber; it’s much more insidious than that. You’re in a cell of an elegantly constructed panopticon. One that feels like the life you chose – it’s just that all the edges have been filed off to make you easier to predict, to better fit you into the standard model for your category. Not even its architects clearly see the floor plan. And as long as you act the way they want you to (locked into the platform, pleasing their adspace and security customers), they don’t care to. In fact, the way these algorithms work, it may never be possible to see the invisible pigeonholes people are filed into.

Beyond the silent process of categorising and isolation is an even more insidious force. The intricate mechanics of the digital economy also train you to flatly deny that the cage even exists. Even when your face is pressed up against the bars.

I See Six

In George Orwell’s 1984 there is a scene in which Winston, the protagonist of the story, is mercilessly tortured. His inquisitor, O’Brien, is ensuring he has total ideological control over Winston. He holds up four fingers and asks Winston how many he sees. The correct answer is however many “The Party” (in essence the state) tells him to see. Eventually, the pain causes his senses to present him with the illusion of innumerable fingers – and he can only say “in all honesty I don’t know”. When he gives this, the desired answer, he is administered a painkiller that floods his body. He feels instant gratification and love toward his torturer.

It would be facetious to claim that we are exposed to that level of direct manipulation, but this scene illustrates a pattern of domination. It’s reminiscent of the techniques employed in coercive relationships and cult indoctrination. A very common trait of this indoctrination is exploiting and/or fostering a disorganised attachment style. I won’t go into excessive detail, but the basic idea is that you isolate your target and make yourself both the source of traumatic events and the source of comfort. Doing this takes away any sense of stability. Without a clear way to live in a safe and stable environment, the victim has nowhere else to go. And this partners well with a community that reinforces or normalises the victim’s relationship with you. So the victim submits: resisting is too painful, and life is so much easier when you don’t.

It only takes a small extra logical leap to see how this relationship pans out in the online sphere. The relationship between social media use and poor mental health outcomes is very widely documented indeed. Whilst it would be unscientific to claim that user relationships work in exactly the same way, a similar pattern can be observed: the chaotic mix of irritants and ‘good’ content that sucks you in bears a striking resemblance.

And though you still talk to your friends and family, the platform becomes the mediator of all these interactions. These websites are perfectly positioned to inject whatever they like into your feed. Because you have to be there anyway, you are given no choice but to look at it. Even if you’re not isolated directly from your social circle by its online counterpart, you are forced into your assigned category by being present on the platform.

So if these services are so bad for us, why don’t we stop using them?

Opting out is hard. Really hard. The presence of your social circle on these services forces you to be there too, and it extends way beyond what would usually be seen as social networks.

You have facebook, or twitter, or snapchat, or tiktok because your friends do, and it’s so much more convenient to keep up to date with them by looking at what they post rather than talk to them. And if you don’t, you’ll feel left out when everyone else is in on the most recent development and you aren’t. You have amazon because it’s so much more convenient than buying from individual stores online. You have netflix or disney+ or appletv because it’s so much more difficult to watch what you want to without them. You have the same online banking app as your friends because it makes transferring money so much easier. You have spotify because why pay for music when you have access to almost anything you’d ever want for free, or as close to free as to be a distinction without a difference. You have whatsapp because, well, how else will you talk to your friends?

But, there are ways around this. You can buy physical copies of media, you can shop local, you can close your social media profile, you can only use SMS or Signal (matrix/briar if you’re really cool), you can use cash everywhere. However if you try to do this you will immediately encounter a lot of friction. If you have ever tried to ask your friends to switch over to Signal you will be acutely aware of this. If you don’t have spotify, how will your friends request music at your next house party? If you don’t have every single streaming service on the market, how will you keep up to date with the most recently trending show? People you know will increasingly react with annoyance if you aren’t using the same services as other ‘normal’ people. And for this reason you are quietly, gently, but forcibly pushed back into the circle of influence of online platforms.

And finally, once you start using one of these services you are entrapped. Your data, your photos, your interactions, your playlists. Everything that you might value that is created on these platforms becomes stuck there. Yes, you can request your data as an export – but when you get it, what will you do with it? How will you view it? And inevitably it won’t be complete, as half of ‘your data’ might be photos of you on other people’s profiles. And if I want to see your family photos on these services, most of the time I will need a profile myself to view them with any ease.

All these barriers artificially increase the cost of opting out of the system of manipulation.

A computer-generated dream world built to keep us under control

Hopefully I’ve started to allow you to see the bars of this digital prison, even if it goes to great lengths to convince you it doesn’t exist. You may still be unconvinced as to the intentionality of this – surely that’s just market capitalism? Services have to compete. And yes, this is how capitalism works – though ‘true’ market capitalism is apparently anti-monopoly… somehow. But in terms of intentionality, not every service connected to the internet even knows its part in the system. Platforms like facebook or google certainly do. But the independent web shop you bought from, which uses an analytics service to understand how well it’s operating, likely doesn’t.

The issue is that the privacy policy you signed by using the site is more like an uncontract: a dubiously legal waiver that, in almost every case, assures you of your privacy whilst mentioning the ‘trusted third parties’ your data is shared with. (Dubiously legal in that it is legal, but the law has difficulty keeping up with tech.) This loophole is an infinite regress of data processors that in essence allows your anonymised usage data to be sold to the highest bidder. That data can be correlated with other bits of scooped-up information and deanonymised. Given access to your every online activity, your geolocation, your movements, card transactions, viewing and listening habits, all the same data from your friends… it wouldn’t actually be that hard to identify you. You just need to be able to process vast oceans of data – something that, say, a search engine provider would be uniquely talented at. facebook only needs around 300 ‘likes’ from you to know you better than your spouse does.
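The mechanics of that correlation are simpler than they sound. The sketch below joins two invented datasets – “anonymous” usage data and a public record with names attached – on shared quasi-identifiers. Every field name and record here is hypothetical; the point is only that two datasets that are each harmless alone can identify you together.

```python
# Toy re-identification by linkage. All records and field names are
# invented; real linkage attacks use the same join, at vastly larger scale.

browsing = [  # "anonymised" usage data as sold by a data broker
    {"postcode": "SW1A", "birth_year": 1985, "sites": ["forum-x", "clinic-y"]},
    {"postcode": "M1",   "birth_year": 1990, "sites": ["shop-z"]},
]
voter_roll = [  # a public record that carries names
    {"name": "A. Smith", "postcode": "SW1A", "birth_year": 1985},
    {"name": "B. Jones", "postcode": "M1",   "birth_year": 1990},
]

def link(anon, public, keys=("postcode", "birth_year")):
    """Attach names to 'anonymous' rows sharing the same quasi-identifiers."""
    index = {tuple(p[k] for k in keys): p["name"] for p in public}
    return [{**row, "name": index.get(tuple(row[k] for k in keys))}
            for row in anon]

for row in link(browsing, voter_roll):
    print(row["name"], "->", row["sites"])
```

With enough overlapping datasets, the combination of a handful of mundane attributes is unique to you – which is exactly why ‘anonymised’ is doing so much work in those privacy policies.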

This is how the entire system of digital systems is able to trap you and mould your movements and the way you think. Little micro-nudges, friction, inconvenience. But simultaneously being everything to you, all the time. Inserting an internet connection (a surveillance tool) between you and almost everything you care about. Then weaponising what would once have been military-grade identity-profiling technology to push you to make as much money for them as possible.

It’s this implicit buy-in, the click-wrapped agreement to terms of use and privacy policies, that has become so ubiquitous as to be invisible. It’s a flimsy pretext for a binding legal agreement, to documentation that averages out to about 250 hours of reading time per user, granting them permission to use your information as they see fit. Users are presented an environment where they are made to feel like they have control – that it is their space to shape and build – but this isn’t the case. These aren’t digital town squares, but digital shopping malls. A for-profit company owns the platform you’re using, and you’re ultimately at its mercy. Even critical infrastructure feeds into this: the UK government’s website uses google analytics, and the data created is shared with third parties, prime for deanonymisation. Even your bank is likely selling your data.

This primes you for influence by an unknowable cabal of disparate forces, all driven to make the most of private information and extract as much value from you as possible.

The upshot is not a specific outcome – as mentioned, these companies only really care about their bottom line. Meta did not intend to aid the Myanmar genocide, but doing so generated the most profit. It’s not the goal of social media websites to promote the rise of fascism and white supremacy, but it makes money. Whilst it isn’t the only place the pattern is visible, it is in the rise of the alt-right and stochastic terror attacks that it is easiest to see. And in being so visible, it also, as implied earlier, most clearly mirrors the recruiting strategies of cults.

It builds a kind of self-reinforcing authoritarianism.

Stochastic Totalism

Stochastic (as in a result of random probability) terror is a kind of randomised violence where a group of people buy into and help spread a violent narrative. The message is spread multilaterally – at any level of media and at any level of intent. However the result is that the ambient levels of anger toward a specific group of people eventually bleed out into real life, often in deadly fashion. These groups will choose authority figures who may not even recognise themselves as such, but who are nonetheless held up as figureheads. It doesn’t matter if they’re right. It doesn’t matter if they’re even on the same side. It only matters that they pass the vibe check and embody a narrative. Ian Danskin has identified this behaviour as stochastic totalism – and I think it’s an incredibly important observation. But I don’t think that framing quite captures the entire picture.

We are all being pulled into our own version of this. Maybe not quite so violent. Maybe not quite so obviously vitriolic. But the centralising, categorising, oversimplifying nature of algorithms designed to maximise profit means we are all shunted into prefigured thought groups. We are all encouraged to find our own thought leaders and authority figures. And we’re given our own virtual neighbourhoods of like-minded people – ones we self-select because we are guided to do so.

And because these stochastically generated affinity groups autonomously grow so big, they can be weaponised to brigade mainstream media. They become powerful enough to push politicians into capitulating and conciliatory roles. Government leaders become stewards for their preferred flavour of the status quo rather than carrying ideological visions and firm plans. It mirrors the capitalistic drive to siphon as much profit from as little material as possible. Never creating anything, just grinding existing assets down until there’s nothing left.

Depressing, isn’t it? But that’s the point.

Doomerism

Since the fall of the Soviet Union and the neoliberal declaration that history is over, we’ve been living in an infinite now. Fukuyama claimed that representative democracy is in its final form. We did it, we solved government. And in the decades since 1991 we’ve seen exactly how stable and effective this ‘true form of government’ is.

This attitude – that we have reached a point where the tools with which we wield state power are perfect – lends itself to a focus on plugging holes instead of long-term solutions. Market capitalism, likewise, is seen as a natural good; the only reason it could ever go wrong is crony capitalism. This says that this is as good as it gets – a viewpoint that corporatism thrives on. Whether it’s true or not, the spectre of ‘if we only did capitalism properly everything would be ok’ insists that humanity has finally worked it out, that all remaining problems are individual ones. Indoctrinating this mindset allows capitalist interests to hide themselves as targets of criticism. It is in their interests that we don’t question the dominant neoliberal narrative.

Visions of a dystopian, corporatist future are partially popular because they don’t challenge us with change. Being resigned to our own inevitable demise means that we don’t have to worry about fixing it. We know there is no future, so we don’t have to worry about it.

We know that solving climate change is too big, that a few recycled or reusable plastic bags are a drop in the bucket. Yet we are told that it is each of our individual responsibilities to reduce our ‘carbon footprint’ (a term popularised by the oil company BP). It’s too much responsibility to be held to – we know that, individually, we cannot do this. And we are kept overworked, reducing the energy we have to even think about it. We’re all just trying to survive, day by day. And this scales upward – most corporations are just trying to cover their bottom lines and ensure they have enough money to return to investors. Even governments hold off on investing in solutions that take more than a few years to come to fruition, for fear that they won’t be in office by the time the results of trying to repair or improve systems pay off.

This is relevant because with the advent of social media there is no way to escape the news cycle entirely – it gets pushed into users’ faces, often without warning and at unexpected times. News apps are built into phones to ensure you are shown the right thing at exactly the right time to bait you back into the doom cycle. Just enough. Online arguments exhaust people and push them away from politics, and this view of society encourages us to stay in our respective lanes.

Pursuit of profit at the expense of everything else has led to some of the most precarious working arrangements ever conceived, with the rise of the gig worker. And gig workers are now fully integrated into our society. Deliveroo, uber, better help, even a lot of AI work is actually done by precariously employed individuals with no contract and no job stability. The work of creating entertainment is also slowly being turned into gig work, with youtube and spotify putting pressures on musicians, documentary makers and video essayists to crank out material on a regular basis or face harsh penalties in their distribution. Even then the relationship to stochastic totalism means that if they fall afoul of the algorithm they may suddenly lose security anyway. The pressure to follow the curve of totalistic demands means that you have to appeal to the sect you’ve been assigned to – without even necessarily knowing who they are.

These pressures lead to a population that is burned out, exhausted, and disproportionately concerned with present conditions over long term plans. It’s survival. It’s a state of hopelessness that constrains our ability to collaborate – it is designed to disempower us.

At each level of scale, the production demands of stochastic totalism (hand in hand with market capitalism) drive a bend toward the middle that crushes creative thought and independent thinking. Sequels, remakes, spinoffs: the creative industry is being crushed under the weight of intellectual property franchises. Market dominance by monolithic rights holders and guaranteed sales returns starve out the drive to pursue original ideas. Major music labels now only invest in sure bets, not in developing potential.

It’s a pattern that bears a concerning similarity to the behaviour of ‘stochastic leaders’ – people like Jordan Peterson, Ben Shapiro, Russell Brand, Alex Jones, Donald Trump. Self-selecting feedback loops based on nothing but profit maximisation and consolidation of power.

Capitalism is a death cult, and I personally don’t believe any level of regulation can ‘fix’ this phenomenon. It even extols its own narrative: it is easier to imagine the end of the world than the death of capitalism. The entire mechanism is designed to make you feel disempowered and helpless. The ‘doomer’ mindset is its primary tool of self-defence.

You can’t correct for hate

The problem with taking a reformist approach to issues like these is that you inevitably end up in a position equivalent to the paradox of tolerance. Free market capitalism and resisting monopolies run counter to one another, and monopolies are a natural result of trying to achieve market dominance. When selfishness is rewarded so handsomely by the design of a system, I can’t help but see this as poisoning the well in the same way that granting the truly intolerant equal ground with the tolerant does. The double movement theory places individualism and socialism on two sides of a scale. But when one of those forces (intense free-market capitalism) is able to crush the majority of the population into poverty, whilst the other is looking for equity, it is clearly incompatible with our continued existence.

A similar approach is being taken to the systems that make up the backbone of the digital (‘platform’) economy: automated moderation systems, content selection algorithms, AI chatbots, and so on. A common view is that these programs are in their infancy and can be corrected and improved incrementally – a kind of AI reformism. Bad, negative, or unhelpful patterns can be weeded out. In some cases this can be true: if the problem is very simple, there is a good chance that machine learning can solve it fairly reliably. Tasks like gait optimisation for walking systems, shortest-path calculations, and other problems tackled with evolutionary algorithms have a fixed set of inputs and an output that can be easily assessed for success. Facial and gestural recognition has also been deployed fairly widely, and even though this is still a fairly simple task in comparison to others attempted today, these systems are routinely less capable of correctly identifying dark-skinned faces than light-skinned ones. Similarly, in automated health care assessment systems powered by AI, those from more diverse backgrounds are routinely marked as at higher health risk than those from a white background. The same can be said of the automated benefit fraud prevention tool used by the UK government, which cut the benefits of some of the UK’s most vulnerable people because they fit a pattern the AI model saw.

The problem is that even those building these systems can’t actually explain how they work – they are built by pumping loads of data into a training system. The AI system then makes inferences after being shown huge amounts of examples and asked to perform tasks based on them. But because the system doesn’t ‘see’ or ‘think’ like a person does, there is no telling exactly what it picks up from the data given. But the data they use to train these tools comes from a world that is saturated with things like structural racism, sexism, any kind of bias you could name. They aren’t just in the data – they are the data. As a result, data from an imperfect society will train an AI to do things imperfectly. But this becomes a problem when these systems are presented as science. Presenting these systems as ‘using science’ gives them a veneer of a ‘view from nowhere‘. If it’s science, then it must be logical, unbiased, cold and calculating. The screen of science is used to protect a system that does nothing more than amplify existing biases in society.
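A deliberately minimal sketch of “the bias is the data”: a lazy model that simply converges on the majority outcome per group will reproduce historically biased decisions exactly, even when the applicants are otherwise identical. The loan scenario and all records here are invented for illustration.

```python
from collections import defaultdict

# Invented historical loan decisions, biased against group "B"
# regardless of income: (group, income, approved).
history = [
    ("A", "high", 1), ("A", "low", 1), ("A", "high", 1),
    ("B", "high", 0), ("B", "low", 0), ("B", "high", 0),
]

def train(rows):
    """'Learn' the majority outcome per group -- what a naive model
    trained purely to match historical decisions converges to."""
    votes = defaultdict(list)
    for group, _income, approved in rows:
        votes[group].append(approved)
    return {g: round(sum(v) / len(v)) for g, v in votes.items()}

model = train(history)

def predict(group, income):
    return model[group]  # income never mattered in the history, so it's ignored

# Identical applicants, different groups, different outcomes:
print(predict("A", "high"), predict("B", "high"))  # → 1 0
```

No malice was coded anywhere in that sketch; the discrimination arrives entirely through the training data, which is precisely why ‘the algorithm decided’ is not a neutral statement.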

Just like with the double movement of capitalism, the thinking seems faulty. You can’t train neutrality out of bias. The systems being made to automate your work, your healthcare, your policing, your newsfeed are all predicated on nothing more than making money. And not only that, they do it by stratifying society into sectors suggested by the pre-existing stratification of our world. A real solution requires you to unplug – to step out of ‘the matrix’, to have diverse conversations between peers and affinity groups. Breaking out of this system requires a voluntary step back from this kind of technology and towards one another.

Outside the Asylum

One image that keeps coming back to me when I discuss this stuff is the ‘Outside of the Asylum’ from Douglas Adams’ book So Long, and Thanks for All the Fish. In it, a character called Wonko the Sane builds an inverted asylum: it houses the entire universe ‘inside’ the asylum and reserves a small square patch of garden ‘outside’. This is to get away from the madness of a world that feels it needs detailed instructions on the use of toothpicks.

Explaining all that I have just explained to you here feels exactly like that. It’s absolutely crazy. It’s massive. It almost feels entirely inevitable.

But I can assure you that it’s not.

Unfortunately, unplugging from this ‘matrix’ isn’t going to be as easy as taking a red pill. Ultimately it will be different for every person. And the way these systems work is, as mentioned, specifically designed to be hard to resist. Just like preventing climate change, there is no one action any of us can take individually to stop it. (Although there are probably some individuals who could, but absolutely won’t.) Instead what it requires is understanding, consciousness, and consistent effort. If you can easily stop using a social network, do so. Or find one a little less rubbish. Try to minimise the number of app-mediated services that run your life. Start using alternative youtube frontends and alternative clients. Pay with cash or your bank card, rather than your phone. Use a privacy-respecting phone, divest from FAANG service providers. Physically meet up with other people in real spaces. Get involved in your communities. Dan McQuillan suggests People’s Councils – specifically focussed on AI, but generally worth embracing: specialised niche interest groups of people affected by a specific issue. Learn how to run your own tech solutions, or find people interested in helping you do so. The only reason these platforms have so much power is ease of use. If we could all accept a little more friction, we might stand a chance of collectively unplugging.

And once we do that, we may find ourselves in a world where we once again feel more truly free than before.


A lot of research went into this article, but I’d like to specifically reference the work of Dan McQuillan, Ben Tarnoff, Shoshana Zuboff, and Ian Danskin as primary sources.


AI, IP theft, and the death of creativity.

Over the past year or so, the prevalence of machine learning and AI-generated material has reached a new level of fervour. ChatGPT, in particular, has triggered a chorus of articles, think pieces and general handwringing over the future of humanity. An algorithm could be coming for your job next, be that copywriting, art, or even software development. It seems that these services have the uncanny ability to produce almost anything you could ask for. An infinite conversation between Werner Herzog and Slavoj Zizek. The cover of Cosmopolitan magazine. A winning social media profile that shoots to instant fame. A credible excuse for Elon Musk not to fire you from Twitter (honestly get out whilst you still can though).

I would argue that the all-powerful creative might of AI has been widely overstated.

You see, these algorithms are incredibly talented at taking pre-existing concepts and putting them together in a convincing way. My claim is that the outputs of these systems contain no genuine creativity. This is possibly a controversial claim, and will require a little exploration and philosophy.

Computer make art

As with most areas of philosophy, the definition of creativity is hotly contested, though there is an emerging consensus that for something to be creative it needs to satisfy two conditions. Firstly it must be ‘novel’ – a new combination of elements, original in some form. Secondly it must be ‘valuable’, though value in this context could be replaced with ‘exemplary’ or ‘notable’: it must have something to make that originality of wider interest – though even this has been contested. Whilst there have been studies on AI and creativity, the limiting factors in current research are in-domain knowledge (to allow researchers to assess creativity within a medium) and valuation of results (the ability to judge the creative worth of the output). The complexity of the task of evaluation cannot be stressed enough – alongside definitions of creativity, these tend to rely on factors like emotional response, motivations, lived experiences, shifting tastes, values and more. This is a running theme for researchers of creativity, alongside aspects of self-expression, expression of ideas, and a blending of the two between the conscious and unconscious mind.

What I’m getting at is that theory of creativity tends to be connected to theory of mind and theory of consciousness. I propose that creativity has a direct causal relationship with consciousness. It is my personal belief that to be able to create in the way we understand true creation, one has to have a conscious mind.

This should bring some context when I say that what these systems do is fit Lego bricks together in the way they understand you’d like them assembled – but they don’t create the bricks. The bricks are concepts they are trained to identify by processing vast amounts of data, acquired through various and often dubious means. There have already been controversies around where the training data for MLaaS (Machine Learning as a Service) platforms originates. Sources include GitHub, DeviantArt, stock photography websites, and even individual artists. There have been attempts to justify the use of copyrighted data in training AI algorithms, and a large class action lawsuit is currently being fought over the subject. One side – myself counted amongst them – declares the phenomenon essentially ‘fancy stealing’. The other argues that it should count as fair use. The jury is still out, and the use of data gathered without consent for research projects that eventually spawn businesses has been dubbed “AI Data Laundering”.
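The Lego-brick point can be made concrete with a deliberately tiny sketch: a bigram (Markov chain) text generator. This is a vast simplification of how modern models actually work – they learn statistical representations, not lookup tables, and all the names below are my own invention for illustration – but it shows the structural point: every ‘brick’ the generator can emit was extracted from its training data. It can recombine them endlessly, but it can never mint a new one.

```python
import random

def train_bigrams(corpus):
    """Build a bigram table: each word maps to the words seen after it."""
    words = corpus.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, seed=0):
    """Walk the bigram table; every emitted word came from the corpus."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
table = train_bigrams(corpus)
text = generate(table, "the", 8)

# The output can be a sentence the corpus never contained, yet its
# vocabulary is strictly a subset of the training data:
assert set(text.split()) <= set(corpus.split())
```

The assembly is novel, but the bricks never are – which is precisely the distinction between recombination and creation being drawn above.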

The legal definition of copyright and the role of IP theft is a subject that deserves its own deep dive, but there are important considerations outside of it. We have arguably established the dubious sourcing of the raw material used to create MLaaS content, and ruled out the idea of an algorithm being able to express true creativity. Let’s take a look at how that relates to the creative industry.

The Money Machine

It is a well-known truism that copyright protects creators asymmetrically: a small artist will have a much harder time claiming copyright infringement than a large, well-funded organisation. Not to mention that the dominant way creatives achieve success and renown is on big, centralised social media platforms with draconian, easily abused and unaccountable copyright enforcement processes.

This leads to a situation where creators are forced onto platforms that not only severely curtail their ability to riff on existing works (something that is necessary for creativity, because no idea exists in a vacuum), but actively mine their data. On these platforms they are forced to produce work that specifically appeals to the algorithms that govern their feeds. That work is then recycled into yet another algorithm – e.g. GPT-3.5 or Stable Diffusion – that is designed to take paid commissions away from the original creator.

Because capitalism demands that work make a profit, any big-budget creative project is expected to deliver a return on investment, often at the expense of creative merit. One great example is the decline of memorable scores in the Marvel Cinematic Universe, commonly attributed to the use of ‘temp’ music: an existing piece of licensed music is used to edit the film before an original score is composed, lining up shots and cuts with the chosen temp track. By the time the composer has the chance to write and record the final soundtrack, the edit is often so inflexibly tied to the temp score that their hands are tied, and the end result is a non-copyright-infringing duplicate.

As productions gain larger price tags, the tolerance for risk shrinks, leaving less space for innovation. Whilst the originality of a piece by no means has a direct relationship to its financial success, a higher price tag – and consequently more pressure for a guaranteed hit – makes a safe bet far more enticing to investors than a gamble. Simultaneously, social media algorithms are designed to maximise user engagement, keeping people on the platform as long as possible. Specific weight is given to posts that are monetisable – selling attention to the highest bidder – which tends to mean these financially-boosted posts need a functional business model behind them. Creators have to bend to these demands accordingly.

The upshot is that, for exposure at any level, there is heavy structural incentive for a repeatable, guaranteed sale. It doesn’t mean that true creativity is impossible inside this system, but it can severely curtail originality for the sake of traction. A Faustian pact that isn’t exactly new, but has been reborn in an industrialised, automated fashion. One that is self reinforcing and cajoling. Nudging the creator’s hand in a quiet but mercilessly-insistent manner. And each concession to these forces makes the next easier, and the creator more perfectly honed for algorithmically-optimised creative success. A safe kind of success that will make the investors happy.

But do you know what else is algorithmically trained for success based on a varied diet of cultural references? One that is actively honed to produce exactly what the commissioner has asked for, based entirely on pre-existing work? One that is guaranteed to be ‘original’ and non-licence infringing? That can solely make derivative work that scratches an itch but contains no threat of original thought or expression?

AI generative platforms.

They are uniquely suited to exploiting the profit incentive in creative work.

The death of creativity?

Evolutionary psychologists have theorised that the fundamentally distinguishing feature of humanity is imagination – a big part of creativity. Not just the ability to communicate, but to theorise, to imagine the future, or even the absurdly impossible. I am not a psychologist, evolutionary or otherwise, and am unable to substantiate my supposition that imagination is an innate property of consciousness. However, I believe the ability to create is synonymous with the existence of a mind – and thus, until we have created artificial consciousness and agency, we cannot have artificial creativity. Taking this into account, anything that emerges from these algorithms is guaranteed to be an amalgam of previous works: 100% derivative and, in my opinion, unoriginal. The reason I draw this distinction is that I think it underlines something fundamental about the nature of art.

So far I have painted a pretty bleak picture of the role of the artist in society, but I also believe this is only one possible future. I don’t believe that AI is, by itself, a threat to art and creativity, because this isn’t the first time we’ve seen this pattern in the field. The invention of the printing press, the camera, digital painting and image composition, procedurally generated art – all of these fundamentally changed how we look at art. But none of them destroyed it; they only changed our attitude and approach. The camera and photography didn’t destroy painting, or even portraiture – they only refocussed the painter on expression rather than mechanical reproduction. In the same way, I predict these services will teach us the value of true creativity. If we’re willing to collectively learn that, of course.

The issue is that, as has been argued many times before, the profit motive crushes creative thought. A safe production that appeals to the market actively resists exploration and innovation – and that, I think, is the heart of the matter. The problem isn’t controversial subject matter, or changing your expression to suit your audience; those are the limitations usually cited as the yoke capitalism puts on creativity, and whilst they can be much maligned, depending on the situation, setting clear boundaries for a project can actually help inform creativity. The true limitation is a need for safety rather than saying something new. And why do people consume creative works? Arguably, because they want to experience something new, because they want to be inspired. Because they want to experience something that has been created. Otherwise, why make anything at all?

I don’t know how we continue a capitalistic model in the creative industry whilst protecting innovation. But if we don’t want to deal with another 30 Avengers movies and another 50 Star Wars spinoffs, until every safe permutation of our favourite action figures smashing into one another has been exhausted, we must decouple the act of creation from the profit motive. Or we can lie back and let algorithmic content, created by algorithms to feed algorithms, train people to act like machines.

Platform hypocrisy

This dystopian future of a creatively bereft, machine-automated public imagination is only possible because a small number of very powerful companies hold a massive database – one scraped wholesale off the internet, regardless of copyright status, and justified under the excuse of ‘academic research’ – off the back of which commercial services are being built. Whilst the legal ramifications are a subject for another time, I find it difficult to see this as either an acceptable or ethical situation. I would argue that if creativity is conditional on consciousness (not necessarily a fact, but a fairly well-evidenced philosophical claim), the output of MLaaS platforms trained on copyrighted data is just stealing with extra steps.

Whether legal or not, it’s difficult to see it as anything but theft from those in positions of precarity by those with incredible amounts of leverage – especially when so many of these services have relationships with platforms that tyrannically enforce the DMCA (or at least own platforms that do), or, failing that, scrape content from platforms with those rules.

When it comes to the ethics of using AI in general, I honestly think that, as a creative, it could be quite useful to have a personal machine learning assistant, trained on information you give it, to help your process. The issue here isn’t the technology, but its weaponised use to oppress the people who create the art and innovation we build our daily lives around. Similarly, my issue with the copyright infringement here is not that copyright is being broken – I happen to think copyright is a severely flawed system. My issue is that the asymmetry in power allows for the crushing of creative thought for the sake of making a quick buck.

Search engines are rapidly rolling out their own ChatGPT-style services to jump on the trend, with some new search providers offering the two services rolled into one. It seems evident that platforms are trying to deploy ML services as widely and rapidly as possible, normalising them in the market before any kind of resistance can build. I think part of this is to astroturf AI data laundering into acceptability – to make this kind of stealing OK, because the sheer volume and untraceability of the data makes individual thefts impossible to distinguish.

It’s essentially OK to steal, as long as you make sure to steal everything when you do.

UPDATE 09/02/2023:

I reached out directly to the ChatGPT algorithm for comment on the phenomenon of AI Data Laundering and MLaaS platforms, and got the following response:

“I think there is definitely substance to the allegations of ‘AI data laundering’ and the use of copyrighted data in commercial services. Copyright infringement is a serious issue, and it is important that companies and institutions take the necessary steps to ensure that they are not using copyrighted data without permission. It is also important to ensure that the data used to train AI systems is not only accurate, but also ethically sourced. If companies are using copyrighted data without permission, then it is not only a violation of copyright law, but it also undermines the trust of the public in the technology. It is important that companies and institutions take the necessary steps to ensure that they are not using copyrighted data without permission, and that they are using ethically sourced data to train their AI systems.”