Top 10 technology and ethics stories of 2023
As with 2022, Computer Weekly’s technology and ethics coverage continued to focus on working conditions in the tech sector, looking, for example, at the experience of UK Apple workers attempting to unionise and the efforts of digital gig workers who train and maintain today’s much-hyped artificial intelligence (AI) systems.
AI itself also took centre stage in Computer Weekly’s technology and ethics coverage, reflecting the proliferation of the tech globally since the end of 2022 when generative AI models came to the fore. This included stories about algorithmic auditing, the ethics of AI’s military applications, and its potentially unlawful deployment in Greek refugee camps.
Interviews conducted throughout the year with AI critics were also a big focus, with the idea being to elucidate perspectives that might otherwise get drowned out in the constant upswing of the AI hype cycle.
Computer Weekly also covered calls by developers small and large to break the app store monopolies of Apple and Google, plans by the UK government to surveil the bank accounts of benefit claimants, and developments related to various workplace monitoring technologies.
Under the banner of the Coalition for App Fairness (CAF), small developers and established global companies alike have been calling for urgent regulatory action to make app stores competitive and break the monopolies of Apple and Google.
Set up in September 2020 to challenge the “monopolistic control” of tech giants over the app ecosystem, CAF members spoke to Computer Weekly about their claims of unfair treatment at the hands of Apple and Google.
This includes the levying of an “app tax”, opaque review processes that are compounded by a complete lack of communication and unclear rules, and restrictive terms and conditions that prevent developers from engaging directly with their own customers.
Algorithmic auditing firm Eticas spoke to Computer Weekly about its approach to “adversarial” audits, which is the practice of evaluating algorithms or AI systems that have little potential for transparent oversight, or are otherwise “out-of-reach” in some way.
While Eticas is usually an advocate for internal socio-technical auditing – whereby organisations conduct their own end-to-end audits that consider both the social and technical aspects to fully understand the impacts of a given system – Eticas researchers said developers themselves are often not willing to carry out such audits, as there are currently no requirements to do so.
“Adversarial algorithmic auditing fills this gap and allows us to achieve some level of AI transparency and accountability that is not normally attainable in those systems,” said adversarial audits researcher Iliyana Nalbantova.
“The focus is very much on uncovering harm. That can be harm to society as a whole, or harm to a specific community, but the idea with our approach is to empower those communities [negatively impacted by algorithms] to uncover those harmful effects and find ways to mitigate them.”
In 2023, Computer Weekly coverage also focused on the human labour that underpins AI.
In August, for example, Turkopticon lead organiser Krystal Kauffman spoke to Computer Weekly about how globally dispersed digital gig workers at Amazon’s Mechanical Turk are in the process of coordinating collective responses to common workplace challenges.
In October, eight US politicians wrote to nine leading American tech companies demanding answers about the working conditions of their data workers, who are responsible for the training, moderation and labelling tasks that keep their AI products running.
The letter – addressed to Google, OpenAI, Anthropic, Meta, Microsoft, Amazon, Inflection AI, Scale AI, and IBM – calls on the companies to “not build AI on the backs of exploited workers” and outlines how data workers are often subject to low wages with no benefits, constant surveillance, arbitrary mass rejections and wage theft, and working conditions that contribute to psychological distress.
The companies’ alleged failure to “adequately answer” the questions later prompted 24 American unions and civil society groups to write to Senate majority leader Chuck Schumer about the poor working conditions of data workers, and how they are negatively affected by new technologies.
UK Apple Store workers organising for better working conditions have said the company is actively trying to prevent staff from exercising their right to join a union.
In February 2023, Apple’s Glasgow store became the first of its 40 UK-based locations to unionise, after workers gained formal union recognition from the company, while staff at its White City and Southampton stores are currently in the process of seeking “voluntary recognition” so that workers at each location can act as single bargaining units.
Speaking with Computer Weekly, workers and union reps claimed Apple has been using a variety of “union-busting” tactics to discourage them from organising, including allegations of prohibiting employees from even discussing unions at work; holding anti-union “downloads” (Apple-speak for team meetings) and one-to-ones with managers; and implying workers will lose out on workplace benefits as a result of unionising.
They also spoke about the role of surveillance and automation in Apple stores, claiming these practices support the company’s approach to disciplinaries, which they say allows the company to cut back on staff without the negative press other tech firms received following their large-scale layoffs.
They claimed this has been a particular issue for disabled or neurodivergent workers, as Apple’s workplace monitoring systems do not adequately account for their circumstances.
In its Autumn Statement, the UK government confirmed its plans to monitor the bank accounts of benefit claimants, claiming the measure will improve the detection of fraud in the welfare system.
According to the Autumn Statement: “The government will … take further action on fraud and error by legislating to increase the [Department for Work and Pensions] DWP’s access to data on benefit claimants that is held by third parties (e.g. banks).
“This will enable DWP to better identify fraud in the welfare system, especially in detecting fraudulent claims where there is undeclared capital, which is the second highest type of welfare fraud. These extra powers are estimated to generate around £300m per year savings by 2028-29.”
It has since been confirmed that these new powers to monitor bank accounts are included in an updated version of the government’s forthcoming Data Protection and Digital Information Bill.
Speaking in the House of Lords on 8 November, Green Party peer Jenny Jones described the plans to “spy on the bank accounts of those receiving benefits” as a “new low in this government’s constant vile behaviour”.
“Never in our history have a government intruded on the privacy of anyone’s bank account without very good reason. Now we are treating all people on benefits as potential criminals. If MPs think this is a good idea, let us ask them to go first,” she said. “With all the cases of corruption, second jobs and undeclared incomes, would MPs be okay if the banks had the ability to raise red flags on their accounts? That seems to make sense – to test the system before we use it on other people.”
In March, Elke Schwarz, an associate professor of political theory at Queen Mary University London and author of Death machines: The ethics of violent technologies, spoke to Computer Weekly about the ethics of military AI, arguing that many of the ethical arguments put forward to justify deploying the tech in military settings do not hold up to scrutiny.
Charting the contours of the discourse around military AI, Schwarz said the idea of “ethical weapons” is relatively recent (gaining serious traction after the Obama administration started heavily using drones to conduct remote strikes in Iraq and Afghanistan), and challenged prevalent ideas that the efficiency or precision of weapon systems makes them more moral.
She said AI-powered warfare also risks further dehumanisation in times of conflict, as it reduces human beings to data points and completely flattens out any nuance or complexity while increasing risk for those on the receiving end of lethal violence.
Schwarz also detailed how military AI is being shaped in the image of Silicon Valley, and how the convenience of using AI weapons in particular can lower the threshold of resorting to force.
In November, a United Nations (UN) body approved a draft resolution on the negative impacts of AI-powered weapons systems, saying there is an “urgent need” for international action to address the challenges and concerns they present.
Spearheaded by Austria, the resolution noted that its sponsors are “mindful of the serious challenges and concerns that new technological applications [represent] in the military domain”, and that they are “concerned about the possible negative consequences … [of lethal autonomous weapons systems (LAWS)] on global security and regional and international stability, including the risk of an emerging arms race, lowering the threshold for conflict and proliferation”.
The draft resolution, now passed, specifically requests that UN secretary-general António Guterres seek the views of member states on LAWS, as well as their views on “ways to address the related challenges and concerns they raise from humanitarian, legal, security, technological and ethical perspectives and on the role of humans in the use of force”.
These views should be reflected in a “substantive report” that reflects the full range of perspectives given, so that member states can use it as a discussion point in the next session of the General Assembly – the UN’s main deliberative and policy making body.
Throughout 2023, there were a number of developments related to the use of various digital technologies by employers to monitor their workers’ activity and productivity.
In August, the Culture, Media and Sport (CMS) Committee’s Connect tech: smart or sinister report concluded that the monitoring of employees via connected technologies “should only be done in consultation with, and with the consent of, those being monitored”, adding that the UK government should commission research to improve the evidence base around the deployment of automated data collection systems at work.
This was followed in October by the Information Commissioner’s Office (ICO) publishing workplace monitoring guidance, which warned that any surveillance conducted at work must respect staff’s right to privacy.
The guidance outlined steps employers must take when conducting workplace monitoring, including making employees aware of the nature, extent and reasons for the monitoring; having a clearly defined purpose and using the least intrusive means possible; retaining only the relevant personal information to that purpose; and making all information collected about employees available through subject access requests (SARs).
A survey conducted by Prospect union in June found that UK workers are “deeply uncomfortable” with digital surveillance and automated decision-making in the workplace.
In October, Computer Weekly published a story by reporter Lydia Emmanouilidou about two AI-powered surveillance systems (dubbed Centaur and Hyperion) that have been deployed in Greek refugee camps, and which are currently being investigated by the country’s data protection watchdog.
Although the data watchdog’s decision remains to be seen, a review of dozens of documents obtained through public access requests, on-the-ground reporting from the islands where the systems have been deployed, and interviews with Greek officials, camp staff and asylum seekers suggest the Greek authorities likely sidestepped or botched crucial procedural requirements under European Union (EU) privacy and human rights law in the rush to procure and deploy the systems.
The Greek Data Protection Authority’s decision could determine how AI and biometric systems are used within the migration management context in Greece and beyond.
In conversation with Computer Weekly, critical computing expert Dan McQuillan spoke about the imposition of AI on society, with particular focus on AI as an “emerging technology of control that might end up being deployed” by fascist or authoritarian regimes.
McQuillan argued that AI’s imposition from above is a reflection of the social matrix it sits within, and that within this context, there can be no change in how the technology is developed and deployed without widespread, prefigurative social transformation.
He also highlighted the historical continuities and connections between fascism and liberalism, and the weakness of liberal democracies in defending against turns to authoritarianism, which in turn places AI at similar risk due to its socio-technical nature.
“Whatever prefigurative social-technical arrangements we come up with must be explicitly anti-fascist, in the sense that they are explicitly trying to immunise social relations against the ever-present risk of things moving in that direction … not necessarily just the explicit opposition to fascism when it comes, because by then it’s far too late.”