We can confidently say we are living in the age of the algorithm.

Some of the most influential organisations of our day use secret algorithms that steer us towards what we read and watch, recommend restaurants and holiday destinations, and even offer relationship guidance. But what about how we vote?

Advances in algorithms mean that even our political leanings are being analysed, and potentially manipulated.

Cambridge Analytica is a data-mining company built around artificial intelligence that reportedly helped Donald Trump to the White House and assisted the Brexit campaign (though the company denies this).

It may sound extreme, but the organisation has been described as establishing “weaponised artificial intelligence” to manipulate opinions and behaviour with the purpose of advancing specific political agendas.

In fact, Jonathan Albright, from Elon University, told The Guardian we were seeing the emergence of “… a propaganda machine … targeting people individually to recruit them … capturing people and then keeping them on an emotional leash and never letting them go”.

In some ways, this all seems very familiar. It is not far off Frank Underwood’s presidential bid in Season 4 of House of Cards, with big data and algorithms used to determine people’s political preferences.

But just how powerful is this algorithm?

Originally developed by a Cambridge psychology professor, the algorithm works by correlating an individual’s Facebook Likes with their OCEAN scores to infer their gender, sexuality, personality traits and even political leanings. (OCEAN refers to someone’s big five personality traits: openness to experience, conscientiousness, extraversion, agreeableness and neuroticism. It is a standard personality framework used by psychologists.)
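
The details of that model have never been made public, but the general approach, regressing personality scores on a large, sparse matrix of Likes, is described in the academic literature. The sketch below is purely illustrative: it uses random placeholder data and assumes a binary user-by-Like matrix, questionnaire-based OCEAN scores and a standard dimensionality-reduction-plus-regression pipeline.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Placeholder data: 1,000 users, 5,000 possible Likes, five self-reported OCEAN scores.
n_users, n_likes = 1000, 5000
likes = (rng.random((n_users, n_likes)) < 0.01).astype(float)  # binary user-by-Like matrix
ocean = rng.normal(size=(n_users, 5))                          # openness ... neuroticism

# Compress the very wide Likes matrix, then fit a linear model for each trait.
model = make_pipeline(TruncatedSVD(n_components=100, random_state=0), Ridge(alpha=1.0))
model.fit(likes, ocean)

# Score a new user from their Likes alone.
new_user = (rng.random((1, n_likes)) < 0.01).astype(float)
print(dict(zip("OCEAN", model.predict(new_user)[0].round(2))))
```

In the published research, models of this kind were reportedly trained on volunteers who both took the personality questionnaire and shared their Likes; scoring anyone else then requires nothing more than their Likes.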

According to Das Magazin, the algorithm showed incredible accuracy when tested. By analysing just 10 Facebook Likes, it could evaluate a person’s character better than an average co-worker could. It escalates from there: with 70 Likes, better than their friends; with 150, better than their parents; and with 300, better than their partner.

As a cultural researcher, I’m interested in the way we engage with our citizenship. Such algorithms concern me as I do not think our democracy is prepared for these innovations.

According to a Scout report, our data can be harvested not only to predict our behaviour but ultimately to modify it. That has to have an impact on our democracy.

As Tamsin Shaw from New York University notes:

“The capacity for this science to be used to manipulate emotions is very well established. This is military-funded technology that has been harnessed by a global plutocracy and is being used to sway elections in ways that people can’t even see [and] don’t even realise is happening to them.”

So, what can be done with this information?

Think about the influence of fake news during the last US presidential election.

While many voters were staunchly anti-Trump, the information sent to them was never pro-Republican. Rather, these anti-Trump voters were sent news articles, both real and fake, that fostered doubt about Hillary Clinton’s integrity, increasing the odds that they would stay away from the polling booths. They might never vote for Trump, but they became less likely to vote for Clinton.

What’s more, the process is adaptive: if one message is ignored, something else is delivered until you take the (click) bait. The system then knows your triggers and keeps delivering more of the same.
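
Mechanically, this kind of adaptive targeting resembles a multi-armed bandit: keep trying different message variants, then concentrate on whichever one earns clicks. The sketch below is a hypothetical illustration using a simple epsilon-greedy rule and simulated feedback; it is not based on any real platform’s targeting logic.

```python
import random

# Hypothetical message variants a campaign might test on one voter.
variants = ["integrity story", "scandal story", "policy story", "meme"]
shows = {v: 0 for v in variants}
clicks = {v: 0 for v in variants}

def choose_variant(epsilon: float = 0.1) -> str:
    """Mostly show the best-performing variant, occasionally try the others."""
    if random.random() < epsilon or all(n == 0 for n in shows.values()):
        return random.choice(variants)
    return max(variants, key=lambda v: clicks[v] / max(shows[v], 1))

def record_feedback(variant: str, clicked: bool) -> None:
    shows[variant] += 1
    clicks[variant] += int(clicked)

# Simulated voter who only ever engages with one type of content.
for _ in range(1000):
    v = choose_variant()
    record_feedback(v, clicked=(v == "scandal story" and random.random() < 0.3))

# The system converges on the voter's trigger and keeps serving it.
print(max(variants, key=lambda v: shows[v]))
```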

In a democracy, we assume that we understand where the information comes from, but this is no longer the case.

The sources we have come to rely on for gathering information are themselves reliant on algorithms that can be gamed. This became clear when Google’s search engine was manipulated by extreme right-wing groups working with conspiracy theorists to deny the Holocaust ever happened.

It meant that, until Google fixed the problem, the highest-ranked results for a search on Nazi death camps were sites denying their existence. The effects could be profound. As the writer Carole Cadwalladr notes: “Google is not ‘just’ a platform. It frames, shapes and distorts how we see the world.”

If we combine the manipulation of the news we receive with the gaming of the world’s most powerful search engines, suddenly our democratic free will seems vulnerable.

Such developments are now affecting our everyday lives, and we may not even realise it. An increasing number of organisations, whether government or private businesses, now rely on artificial intelligence or machine-learning systems to make decisions.

This is an exciting development. New learning systems mean that educational organisations can identify at-risk students and suggest additional study resources.

Likewise, we can use data to identify health risks of specific populations. Both these applications have incredible potential to assist some of the most vulnerable populations.

But we should also be aware of the downside. Increased reliance on artificial intelligence means that the human element is diminished in decision-making.

While the aim is to limit bias, if the algorithm learns from data produced by systems that are themselves biased, then it will perpetuate that discrimination.
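
A toy example makes the mechanism concrete. In the hypothetical sketch below, the group attribute has no bearing on the true outcome, but the historical labels used for training were harsher on one group, and a model fitted to those labels reproduces the disparity. The data and decision rule are entirely invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group_b = (rng.random(n) < 0.5).astype(float)  # protected attribute: 0 = group A, 1 = group B
risk = rng.normal(size=n)                      # the only genuinely relevant factor

# Historical decisions: same underlying risk, but group B was over-flagged.
labels = (risk + 0.8 * group_b + rng.normal(scale=0.5, size=n)) > 0.5

model = LogisticRegression().fit(np.column_stack([risk, group_b]), labels)

# New cases with identical risk distributions for both groups.
for name, flag in [("A", 0.0), ("B", 1.0)]:
    cases = np.column_stack([rng.normal(size=2000), np.full(2000, flag)])
    print(f"Predicted high-risk rate, group {name}: {model.predict(cases).mean():.0%}")
```

The model is never told anything obviously wrong; it simply learns the pattern embedded in past decisions and applies it to new ones.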

In the United States, a risk-scoring system used to assist in the sentencing of criminals discriminated against black defendants, according to an investigation by ProPublica. China will reportedly soon be using an algorithm to assign each person a “citizen score”.

This will determine the conditions by which they can get a loan, the type of work they can do, and even their ability to travel. The machine decides. Who can argue with that?

Will our democracy survive?

Just how serious is this? These issues have prompted some of the world’s leading universities to develop programs on both the ethics and the safety of AI. My own, Western Sydney University, is developing a data ethics project.

As noted, one of the things that underpins our democracy is knowing where information comes from. But when that information appears on Google or in our Facebook feed, the source is never identified as propaganda; it appears to be free and independent and, as such, we are susceptible.

A recent report by the London School of Economics investigating the impact of AI on democracy noted that the UK’s electoral laws were “weak and helpless” in the face of new forms of digital campaigning.

The laws that have always underpinned Britain’s electoral system could not keep up and needed “urgent review by parliament”.

According to Professor David Miller at the University of Bath, “it should be clear to voters where information is coming from, and if it’s not transparent or open where it’s coming from, it raises the question of whether we are actually living in a democracy or not.”

There is no doubt that bringing together artificial intelligence and big data has the potential to be incredibly positive.

It can assist in our decision-making, and it can provide insights into our behaviour that help us understand who we are.

But like any technological advancement, it is a double-edged sword, and we have to decide just how exactly to deal with it.

Note: This was originally published by ABC News: If Google and Facebook rely on opaque algorithms, what does that mean for democracy?