Guest Post: Tom Pettinger, PhD Student, Examines Artificial Intelligence – Is It the Saviour of Humanity, Or Its Destroyer?


Please support my work as a reader-funded investigative journalist and commentator.


The following article is one of my few forays into topics that are not related to Guantánamo, British politics, my photos or the music of my band The Four Fathers, but I hope it’s of interest. It’s an overview of the current situation regarding artificial intelligence (AI), written by Tom Pettinger, a PhD student at the University of Warwick, researching terrorism and de-radicalisation. Tom can be contacted here.

Tom and I first started the conversation that led to him writing this article back in May, when he posted comments in response to one of my articles in the run-up to last month’s General Election. After a discussion about our fears regarding populist leaders with dangerous right-wing agendas, Tom expressed his belief that other factors also threaten the future of our current civilisation — as he put it, “AI in particular, disease, global economic meltdown far worse than ’08, war, [and] climate change.”

I replied that my wife had “just returned from visiting her 90-year-old parents, who now have Alexa, and are delighted by their brainy servant, but honestly, I just imagine the AI taking over eventually and doing away with the inferior humans.”

Tom replied that it seems that AI “could pose a fairly short-term existential risk to humanity if we don’t deal with it properly,” adding that the inventor and businessman Elon Musk “is really interesting on this topic.”

I was only dimly aware of Musk, the co-founder of Tesla, the electric car manufacturer, so I looked him up, and found an interesting Vanity Fair article from March this year, Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse.

That article, by Maureen Dowd, began:

It was just a friendly little argument about the fate of humanity. Demis Hassabis, a leading creator of advanced artificial intelligence, was chatting with Elon Musk, a leading doomsayer, about the perils of artificial intelligence.

They are two of the most consequential and intriguing men in Silicon Valley who don’t live there. Hassabis, a co-founder of the mysterious London laboratory DeepMind, had come to Musk’s SpaceX rocket factory, outside Los Angeles, a few years ago. They were in the canteen, talking, as a massive rocket part traversed overhead. Musk explained that his ultimate goal at SpaceX was the most important project in the world: interplanetary colonization.

Hassabis replied that, in fact, he was working on the most important project in the world: developing artificial super-intelligence. Musk countered that this was one reason we needed to colonize Mars—so that we’ll have a bolt-hole if AI goes rogue and turns on humanity.

Tom found his own particularly relevant quote, about Mark Zuckerberg, Facebook’s founder, who “compared AI jitters to early fears about airplanes, noting, ‘We didn’t rush to put rules in place about how airplanes should work before we figured out how they’d fly in the first place.’” As Tom explained, AI enthusiasts like Zuckerberg “don’t recognize that at the point it’s too late, we can’t do anything about it because they’re self-learning, and it’s totally driven by the private (i.e. profit-inspired) sector, which has no motivation to consider future regulation, morality or even our existence.”

He added, “I think Musk’s one of the smartest guys on the planet: he wants to tackle climate change, so he starts Tesla and SolarCity; he wants to ensure humans have an ‘out’ against Earth catastrophes, so he develops SpaceX; he wants to ensure the best chance of a good future regarding AI, so he develops OpenAI and Neuralink. His thoughts (and many other experts/thinkers) on AI come down to: we either advance like never before as a species, or likely become extinct, with no middle ground. And the consensus within the AI field seems to suggest within 30-70 years this change will come about.”

Tom then sent two links (here and here) to summaries of the debate about AI, which he described as “hugely informative”, adding, “The first link is a basic introduction to the road from narrow AI (what we have today) to general AI (human-level intelligence) and superintelligence (super-human AI). The second one is aimed more at those interested in social science, exploring the potential consequences.” He also stated, “both are definitely worth a read. If you wanted a summary of them though I’d be more than willing to oblige. I love writing, and this subject!”

I replied asking Tom to go ahead with a summary, and his great analysis of the pros and cons of AI is posted below. I hope you find it informative, and will share it if you find it useful.

Artificial Intelligence: Humanity’s End?
By Tom Pettinger, July 2017

“Existential risk requires a proactive approach. The reactive approach — to observe what happens, limit damages, and then implement improved mechanisms to reduce the probability of a repeat occurrence — does not work when there is no opportunity to learn from failure.” AI expert Nick Bostrom

Does artificial intelligence spell the end for all of us? This post looks at how AI is developing and the potential consequences for humanity of this impending technological explosion. Depending on where the technology takes us, our species could experience the greatest advancement in its history, suffer the worst inequality ever seen, or even be pushed to the point of extinction. I argue that within this century we’ll likely be seeing human-level and super-intelligent AI, and that we should be considering the consequences now rather than waiting until the unknown consequences arrive. Just to make it clear from the start, when we’re talking about AI, think algorithms rather than robots. (The ‘robots’ merely perform the physical functions the algorithms behind them tell them to.) So think less Terminator, and more Transcendence. However it turns out, one thing’s for certain: it won’t be long before our very existence is transformed forever.

Phase 1: Artificial Narrow Intelligence (ANI) – specific-functional intelligence

Artificial Narrow Intelligence is already all around us. ANI essentially consists of algorithms that serve a specific, pre-programmed purpose, allowing humans to function more effectively and enjoyably, and supporting our development. It’s in our smartphones, self-driving cars and computer games: all of these are examples of single-purpose, ‘narrow’ intelligence. The algorithms cannot change their roles, which are strictly defined by their programmers, and they have no ability to decide what their tasks are outside of human control. We’ve become ever more dependent upon ANI, to the point where it’s hard to think of an institution or sector that is not driven by it: finance, education, public transport, energy and trade are all run by computing intelligence, and would collapse if we removed it. ANI is also getting smarter all the time, self-learning within its specified role; AlphaGo, the program that famously beat the Go professional Lee Sedol, developed its tactics by playing against itself millions of times. However, it cannot perform other tasks, like setting a thermostat, displaying a website or changing traffic lights; the program will remain narrow intelligence, consigned to playing Go.

Phase 2: Artificial General Intelligence (AGI) – human-level intelligence

Artificial General Intelligence, by contrast, is the point where algorithms have reached human-level intelligence and can essentially pass the Turing Test, meaning you can’t tell whether you’re speaking with a human or a machine (other definitions of AGI are explored here). AGI is ultimately the ability to practise abstract thinking, reasoning and the art of self-tasking. So where AlphaGo (ANI) can only improve its Go playing, an AGI would be able to master any game it chooses, as well as look at ways of reducing traffic, analyse stock markets and write up reports on war crimes, all at the same time. The main difference between the human and the algorithmic ability to ‘compute’ information in this phase would largely be speed: AGI-level algorithms would have intuition, analysis and problem-solving capabilities similar to our own. But smartphones can already perform instructions hundreds of millions of times faster than the computers that first took humans to the moon, and that gap will only widen.

Robots (essentially shells containing clever algorithms) could also cook dinner, engage in conversation or debate with you, and take your kids to school. Moves towards AGI are underway; computer programs are now learning a range of video games based on their own observations and practice, as a human learns, rather than just being trained to become proficient at one single game. Robots are beginning to hold basic conversations and are interacting with dementia patients. However, we are still a long way off AGI; self-learning a multitude of games or engaging in primitive interactions is nowhere near passing the Turing Test. Having said this, Moore’s Law, the observation that computing power doubles roughly every two years, has held for the last 50 years, and so, although some slowdown will probably occur, the shift from ANI to AGI looks set to occur within 25 years. The median date predicted by experts for AGI has been around 2040 in several different studies. Reaching this level will be a milestone, as it will pave the way for the next phase in our journey towards superintelligence.
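It is easy to underestimate how quickly a steady doubling compounds. As a rough back-of-the-envelope sketch (the function name is mine, and real hardware trends are messier than a clean two-year doubling):

```python
# A sketch of Moore's Law compounding: if computing power doubles every
# `doubling_period` years, the cumulative speed-up over a span of years
# is 2 ** (years / doubling_period).
def moores_law_factor(years, doubling_period=2):
    """Cumulative speed-up after `years` years of doubling."""
    return 2 ** (years / doubling_period)

# Fifty years of doubling compounds to a factor of over 33 million;
# even the next 25 years alone would add roughly another 5,800x.
print(round(moores_law_factor(50)))  # 33554432
print(round(moores_law_factor(25)))  # 5793
```

Even if the doubling period stretched to three or four years, the growth would still be exponential rather than linear, which is why forecasts of AGI within a few decades do not require Moore’s Law to hold exactly.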

Phase 3: Artificial Super Intelligence (ASI) – super-human intelligence

Reaching Artificial Super Intelligence is the point at which artificial intelligence will have become superior to the intelligence of humanity. Here, the ‘Singularity’ is said to have occurred; the term denotes a period in which civilization experiences disruption on an unprecedented scale, driven by advances in AI. Essentially everybody in the field accepts that we’re on the brink of this new existence, where AI severely and permanently alters what we currently know and the way in which we live.

In terms of how intelligent ASI could become, it can be useful to think of different animal species. From an ant, to a chimpanzee, to a human, there are distinctive differences in comprehension. An ant can’t comprehend the social life of a chimpanzee, nor a chimpanzee’s ability to learn sign language, for example. And a chimpanzee couldn’t understand how humans fly planes or build bridges. As ASI develops, it will begin to surpass our own limits of comprehension, like the difference between ants and chimps, or chimps and us. The principal difference between AGI and ASI is the quality of intelligence: ASI will be more knowledgeable, more creative, more socially competent and more intuitive than all of humanity. As time progresses and this superior intellect improves itself, the disparity between ASI and human knowledge will only increase. On the ant-chimpanzee-human scale, ASI may be able to progress to hundreds of steps of comprehension above us in a relatively short amount of time. We can’t really conceive how ASI would think, because its level of understanding would be greater than ours, just as chimps can’t get on our level and comprehend how we fly planes.

There is only very limited debate about whether or not reaching ASI will happen; the debate focusses more on when it will happen. Although some, like Nick Bostrom, warn that this ‘intelligence explosion’ from AGI to ASI will likely “swoop past”, most predict that the transition from AGI to ASI will take place within 30 years. We can think of the scale of change that this transition would bring by looking back over human history. There have been several marked steps in our existence as a species:

  • From 100,000 BC early humans without language to 12,000 BC hunter-gatherer society
  • From 12,000 BC hunter-gatherer society to 1750 AD pre-Industrial Revolution civilization
  • From 1750 pre-Industrial Revolution civilization to 2017, with advanced technology (electricity, Internet, planes, satellites, global communication, Large Hadron Collider)

It is often said that the shock we would get if we travelled to the 2030s or 2040s might be as great as the shock a person would get travelling between any of these steps. If you travelled from 1750 to today, would you really believe what you were seeing? Within this century, most experts predict the arrival of artificial superintelligence, which could produce the same unimaginable shock as a human from 100,000 BC travelling to the advanced civilization of the present day. Notice, too, that each step grows exponentially shorter: the first lasted some 88,000 years; the second about 14,000; and the third just over 250, all with similar levels of change. It should not be inconceivable that, in another 50-80 years, we experience another of these shifts. The rate of technological progress seen across the entire 20th century was matched between 2000 and 2014, and an equal rate of progress is expected between 2014 and 2021. Ray Kurzweil, whom Bill Gates called “the best person I know at predicting the future of artificial intelligence”, suggests last century’s advancements will soon be occurring within one year, and shortly afterwards within months and even days.
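The shrinking gaps between those steps can be checked with quick arithmetic. This is only a sketch: the dates are the article’s approximations, and treating BC years as negative numbers is my simplification (it ignores the lack of a year zero, which is negligible at this scale):

```python
# The article's three historical "steps", as approximate years
# (negative values stand for BC; a deliberate simplification).
steps = [-100_000, -12_000, 1750, 2017]

# Length of each era between consecutive steps.
gaps = [later - earlier for earlier, later in zip(steps, steps[1:])]
print(gaps)  # [88000, 13750, 267]

# How much faster each step arrived than the one before it.
ratios = [a / b for a, b in zip(gaps, gaps[1:])]
print([round(r, 1) for r in ratios])  # [6.4, 51.5]
```

So each transition arrived roughly six times, then roughly fifty times, faster than the one before it, which is the accelerating pattern the paragraph above describes.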

The Great Debate

Whether the change is for good or bad is debated by the likes of Mark Zuckerberg, Larry Page and Ray Kurzweil, who largely see ASI as beneficial, and Bill Gates, Stephen Hawking and Elon Musk who see ASI as posing a potentially existential risk. Those who see only good outcomes from ASI denounce pessimists’ views as unrealistic scaremongering, whilst those who consider the future dangerous argue that the optimists aren’t considering all the outcomes properly.

The Optimist’s View

These optimists assert that ASI will vastly improve our lives and suggest we will likely embrace transhumanism, where ‘smart’ technology is implanted into our bodies and we gradually see it as natural. This transition has already begun, largely for medical reasons — implants that trick your brain into changing pain signals into vibrations, cochlear implants to help deaf people hear, and bionic limbs. This will become more commonplace; Kurzweil thinks we will eventually become entirely artificial, with our brains able to connect directly to the cloud by the 2030s. Some think that we will be able to utilize AI to become essentially immortal, or at least significantly slow the effects of ageing.

Any task that may seem impossible for humans — like developing better systems of governance, combating climate change, curing cancer, eradicating hunger, colonizing other galaxies — optimists say will seem simple to ASI, just as building a house must seem unimaginably complex for a chimpanzee but perfectly normal to humans. Though he’s a pessimist, Bostrom says, “It is hard to think of any problem that a superintelligence could not either solve or at least help us solve.”

One concern has been the mass loss of jobs as AI becomes ever more prevalent. Tasks such as driving, administration and even medical diagnosis, which have always been done by humans, are now vulnerable to automation. It is said that 35% of British jobs, 47% of American jobs and 49% of Japanese jobs (the list goes on) are at high risk from automation over the next couple of decades. However, the optimists counter that, with technological advancement, jobs are merely displaced rather than abolished; automation in one area just means workers retrain and move into other jobs. Bringing tractors on to farms or machinery into factories did not cause mass unemployment; those workers moved into working with the technology or found other jobs elsewhere.

The optimists contend that fears about AI (and in particular ASI) ‘taking over’ are misplaced, because, in their view, we have always adapted to the introduction of new technologies. The lack of pre-emptive regulation around AI is often cited as problematic, but Zuckerberg compared these fears to concerns about the first airplanes, noting that “we didn’t rush to put rules in place about how airplanes should work before we figured out how they’d fly in the first place.” Kurzweil says, “We have always used machines to extend our own reach.”

The Pessimist’s View

The pessimists’ fears are represented by Bostrom’s comment that we are “small children playing with a bomb”; they see the potential threats of AI outweighing the benefits to human existence. They suggest that optimists ignore the fact that ASI won’t be like previous technological advances (because it will be cognitive and self-learning) and will not act empathetically or philanthropically, or know right from wrong, without serious research and consideration. As AI research is largely driven by the private (i.e. profit-inspired) sector, there is minimal motivation to consider future regulation, morality or even our existence. Estimates of 10-20% existential risk this century are frequently given by those in the AI field. Once AI passes the intelligence level of humanity, it is argued that we will have little to no control over its activity; those like Gates and Hawking emphasize that there’s no way to know the consequences of making something so intelligent and self-tasking, and that it pays to think extremely cautiously about the future of AI.

The general rule of life on Earth has been that where a higher-intellect species is present, the others become subjugated to its will. Elon Musk is so concerned with the possibility that “a small group of people monopolize AI power, or the AI goes rogue” that he created Neuralink, a company dedicated to bringing AI to as many people as possible through connecting their brains to the cloud. He thinks ASI should not be built, but recognizes that it will be, and so wants to ensure powerful AI is democratized, preventing the subjugation of the majority of humans to AI or a human elite. If the technology is not widely distributed among humanity, pessimists claim the likelihood is that we will become subjected to ASI, similar to the way chimpanzees are sometimes subject to human will (keeping them in zoos, experimenting on them, etc.).

There is, without doubt, potential for devastating unintended consequences from ASI. Superintelligent machines could develop their own methods of attaining their goals in ways that are detrimental to humanity. For example, a machine whose primary role is to build houses might decide that the best way is to tear down every other house to use those bricks. Or, should they reach the level of intelligence needed to develop their own goals, machines could decide to prevent global warming, determine that humanity stands in the way of protecting the environment, and remove us from the equation. Pessimists argue that the idea that we’ll be able to control ASI, or shut it down when it doesn’t function the way we want, is misguided. Just as chimps are unable to determine their existence outside of human subjection, we will be under the dominion of ASI as we grow more and more dependent upon it.

The pessimist camp expects social upheaval and inequality in the relatively short term, over the next 10-20 years; as machines become better at a wider range of tasks, jobs will no longer simply be displaced. Several commentators (including then-President Obama) have spoken about the potential need for a universal basic income as mass unemployment becomes a likelihood. Lower-skilled and lower-paid workers are far more concerned about AI technology than the elites, because jobs that require less training and education will be among the first to be automated en masse. Obama said, “We’re going to have to have a societal conversation about that.” Currently, however, most governments don’t see it as an impending issue, despite the possibility of a high number of jobs becoming automated in the next two decades.


Hopefully this introduction to AI has made clear where the technology is currently, and where it could be in the near future, and has highlighted some of the possible consequences of achieving ASI. As a society, we should begin thinking about what we want from our future. The advent of superintelligence, which, by essentially all accounts, is going to take place at some point this century, is not an issue to overlook, as it will have enormous social consequences and could result in our total extinction. As it may be the last invention humanity ever needs to make, we should ensure that the development of this technology is matched by similar, if not more advanced, progress in regulation and philosophy.

Andy Worthington is a freelance investigative journalist, activist, author, photographer, film-maker and singer-songwriter (the lead singer and main songwriter for the London-based band The Four Fathers, whose music is available via Bandcamp). He is the co-founder of the Close Guantánamo campaign (and the Countdown to Close Guantánamo initiative, launched in January 2016), the co-director of We Stand With Shaker, which called for the release from Guantánamo of Shaker Aamer, the last British resident in the prison (finally freed on October 30, 2015), and the author of The Guantánamo Files: The Stories of the 774 Detainees in America’s Illegal Prison (published by Pluto Press, distributed by the University of Chicago Press in the US, and available from Amazon, including a Kindle edition — click on the following for the US and the UK) and of two other books: Stonehenge: Celebration and Subversion and The Battle of the Beanfield. He is also the co-director (with Polly Nash) of the documentary film, “Outside the Law: Stories from Guantánamo” (available on DVD here — or here for the US).

To receive new articles in your inbox, please subscribe to Andy’s RSS feed — and he can also be found on Facebook (and here), Twitter, Flickr and YouTube. Also see the six-part definitive Guantánamo prisoner list, and The Complete Guantánamo Files, an ongoing, 70-part, million-word series drawing on files released by WikiLeaks in April 2011. Also see the definitive Guantánamo habeas list, the full military commissions list, and the chronological list of all Andy’s articles.

Please also consider joining the Close Guantánamo campaign, and, if you appreciate Andy’s work, feel free to make a donation.

40 Responses

  1. Andy Worthington says...

    When I posted this on Facebook, I wrote:

    Here’s something a bit different – a guest post by Tom Pettinger, a PhD student at Warwick University, looking at artificial intelligence (AI), where it’s going, and whether, as inventor Elon Musk, Stephen Hawking, Bill Gates and others fear, it poses “a potentially existential risk” to humanity. It arose out of an online conversation between Tom and myself, and I hope you find it as informative as I did, and will share it if you do.

  2. Anna says...

    WOW 🙁 Ughh? What is the most suitable exclamation for this excellent introduction into AI?
    I did not read the links as this is a subject that has been scaring the hell out of me for quite some time and you can count me among the super-pessimists who think that this is a rollercoaster that cannot be stopped anymore even with the best intentions. Forgive me for giving my reasons as they pop-up in my mind, rather than logically structured.
    1. Even the most positive controllable (by us) inventions have all ended up being perverted to negative use, from Nobel’s dynamite to Ms Curie’s nuclear power and the current ‘global communication system’, internet. There is no reason to believe that it will be any different with whatever will be invented in the AI sphere.
    2. While there are some great AI achievements in medicine, and for instance 3D printing for artificial limbs or rebuilding world heritage architecture, they are available to an extremely limited happy few. The rest of the world still succumbs to malaria or cholera, not to mention man-made war, and that will not change, as was pointed out by Tom (or you? 🙂), since all this technology is in the hands of profit-driven owners.
    3. Once AI is smarter than we are, we cannot control it anymore, so even the limited accountability that remains when human beings commit crimes, will vanish. Technological advancement is growing exponentially but our own development isn’t, in particular our moral and ethical standards, so our lagging behind is growing also exponentially.
    4. We already have drones which are still controlled by humans but there are also killer robots which decide for themselves what (or who) is a lawful target.
    5. All this evidently is vulnerable to hacking and only AI will be capable of stopping this -if it feels like it. And there is no doubt in my mind, that hacking is tomorrow’s terrorism. All you need to do is turn all the traffic lights to green at the same time and you’ll create countless accidents. Obviously there is much worse to fear when hospital systems etc are attacked and I expect that much earlier than the AI doomsday scenario.
    6. Not to mention the looming universal unemployment which I happen to have mentioned in my comment earlier today to your previous post. Billions of people will continue or have to learn how to survive on subsistence farming, as long as there will be enough water, which will not be very long anymore.
    Human technological hubris, without enough wisdom to manage that development, has already brought this planet to the brink of destruction by global warming, with self-propagating methane release cycles now also virtually unstoppable, and fairy-tales about colonising other planets are sickening, as evidently only the happy-very-few would benefit from this, with the remaining billions or by then trillions left behind to perish in whatever way the super AI will have concocted.

    As for Alexa :-), few people realize that she not only answers questions but in order to do that, she must listen. Listening is activated by her name, but when does it stop? How easy is it to hack her in such a way, that she listens and transmits continuously? As by the way our lap-tops, smartphones and all the other ‘smart’ appliances such as TV’s, fridges and probably prams.
    What else does she hear apart from your question about tomorrow’s weather? And it all goes into a cloud, which can be hacked. Not to mention such – arguably comical – situations:

    Some time ago I wrote this about Alexa to our common friend Martha: “Apparently she can also record inadvertently, for instance when someone mentions her name without asking any question.
    Sounds frightening.
    So here’s my Agatha Christie: I’m at home watching absentmindedly (so I later do not remember which one it was) one of the few hundred TV stations and want to ask Alexa something so I call her, but just then some awful character appears on screen and I exclaim that “one day I’ll kill that bastard, if no one else does”.
    And I forget all about my question, so all she registers is my vocal “I’ll kill the bastard” without the context.
    Later that evening, I ask her for train schedules to NYC and take the last train that night.
    Tough luck, the next morning my next door neighbour is found dead, murdered, according to the coroner, the night before, probably shortly before I asked for train schedules.
    Makes me shiver, just to think what conclusions could be derived from Alexa’s – supposedly perfectly objective and factual, no emotions involved either way -‘testimony’ stored in the cloud.
    And where does that leave latter day Agatha Christies :-), having to keep up with all the latest technologies, rather than just arsenic & rat poison?
    I must be getting old.”

    Well, Tom has just demonstrated that not only old-and-behind-the-times grannies are worried.
    I appreciate the advantages of the internet, but would happily give it up to go back to the ’50s, when censorship meant having to physically open your letters, so it was traceable and limited.

  3. Andy Worthington says...

    Thanks for your thoughts, Anna. I like your scenario with Alexa mistaking a death threat on TV for a real one. It sounds very plausible.
    Your worries chime with those of the experts who don’t override necessary caution with their boundless optimism – and, presumably, a certain blinkered arrogance. And funnily enough, of course, although some engage in genuine philanthropy, I can’t help reflecting that, if we were to be wiped out by AI, it would be because some smug over-achiever pretended it was all about progress, when as usual, the grubby truth is that it was all about profit and spin. How persistently dishonest this late capitalist world is!

  4. Andy Worthington says...

    Katrina Conn wrote:

    Interesting and a bit scary

  5. Andy Worthington says...

    Yes, appropriately scary, I think, Katrina. It’s genuinely quite disturbing when such high-level tech figures are taking such opposing views of where we might be heading.

  6. Andy Worthington says...

    Tashi Farmilo-Marouf wrote:

    This subject fascinates me, truly.
    I kinda wouldn’t mind having my own “pleasurebot” — since I can’t find a flesh and blood one 😂
    Would giving up on this crazy humanity be so bad really? Maybe robots will be more just and peaceful than us.
    We consider ourselves so “evolved” yet many of us still torture, bomb, murder, rape and so on.
    I know we like to think we’re great but are we?

  7. Andy Worthington says...

    No, not as great as educators and entertainers are constantly trying to tell us we are through what passes for culture nowadays, Tashi, but to me that’s not the problem – it is, instead, ever-increasing unemployment with politicians unwilling to accept a universal basic income, and the more chilling possibility that our super-geeks are blithely setting up artificial intelligence that will turn on us.

  8. Tom Pettinger says...

    Anna – your point #3 is a very eloquent description of the crux of the matter – essentially why this technology could lead to our extinction! We have some moral framework, but nowhere near enough rigidity or consistency to ensure that everyone’s on the same page here, and as a result there’s little to no regard for what could happen in 20-30-40 years. I’m as terrified as it sounds like you are!! It COULD lead to the situation where disease, war, poverty are all abolished, but as you say, we’ve consistently used technology to kill each other, subjugate each other and generally screw over other humans, and when this technology thinks and acts for itself as well, I think we’ll be up the creek without a paddle.

    Tashi’s comment – for me, the reason we should be placed above robots is that we can experience subjectivity… but then there’s the debate about what counts as subjectivity and can you artificially create it! – but yes, we are a particularly effective species at being cruel to anything that moves.

    Andy, you’re spot on, it’s ALL about the money! If there was no money in it, we wouldn’t be having this discussion and talking about the possible end of the human race. For something that will have such enormous consequences either way, we need consideration and thought rather than $$$ driving us. But I fear it’s wayyyyy too late for that…

  9. Andy Worthington says...

    Thanks for your updates, Tom. I’m glad to see your article is generating some interest!

  10. Anna says...

    Oops, somehow my additional comment about AI suddenly vanished, maybe I can recreate it.

    What do we consider ‘intelligence’ to be, the Human one and therefore also the Artificial one derived from it? Is it merely the capacity to define, analyse and solve scientific, technical problems or does it also involve emotions, moral, ethical, empathy, fear etc? We are here on the interface between physics and philosophy and even theology, which is nothing new as such.
    One might argue that human carers for sick or disabled persons are also programmed by their upbringing, education etc, but they still have an emotional capacity and can be open to discussion and arguments – for better or for worse. Smart robots – or the algorithms governing them – will by definition know better and might be hermetic to any human arguments.
    I have a nearly 90-year-old cousin who is very frail and coughs around the clock, but she loves to have her smoke anyway, and I think she should, if that makes the little life that is left to her more valuable. The omnipotent robot might objectively decide that it is bad for her health and take away one of the few little joys left to her. As does her only son, who does resemble a robot …

    Super AI will no doubt be capable of copying and improving technical, scientific aspects. Will it also be able to copy and multiply the ‘values’? Would AI thus acquire something like a ‘soul’, and would that merely be a magnified version of that of its human creators, or eventually be a self-generated one? Which would be more frightening, the greed-based one that we know its creators to have, or the interpretation which the AI would develop from that?
    I simply cannot grasp this, just as I’ve never been able to grasp the notion of eternity, as in the eternal bliss in heaven for those who on earth would have obeyed the ten commandments.
    The idea that something – no matter how nice as such – would never end, would go on forever and ever, scared the hell out of me when I was a kid, long before AI entered my world.
    The school chaplain’s answer to that was ‘don’t worry, we humans simply are not capable of understanding that concept with our simple brain, but once you’ll be there you’ll have no problem understanding (and supposedly enjoying?) it’. AI as the new almighty and infallible god?

    If I am to be scared, I prefer it to be by something whose principles I can understand – such as climate change – as that at least allows for some hope of possible counter-action before it is too late. A phenomenon that I cannot even begin to understand – let alone being able to influence it in any way except through passive resistance by not acquiring ‘smart’ gadgets – I prefer to ignore as much as I can, even if that makes me behave like an ostrich.

  11. Andy Worthington says...

    Good to hear from you again, Anna, and I appreciate how much you’ve been thinking about this, even if you’ve ended up concluding that you don’t want to!
    I laughed at your description of your cousin’s son resembling a robot, but the point, of course, is valid. How can a “logical” artificial intelligence understand the foibles that make up human experience?
    I fear that I, like you, will fail to devote enough time to the growing threat – perhaps, as Tom implies, until it is too late for all of us – but I like to think that by refusing to embrace every new technological development as though it is some kind of miracle, I might somehow be helping the Luddite struggle of humanity to stay free – and unruly!

  12. Tom Pettinger says...

    Really interesting thoughts, Anna. I love the philosophical debate about what constitutes consciousness, but I’m absolutely not qualified to pass much comment, except to say that there’s a lot of fascinating reading (and YouTube videos!) out there. As somebody who’s interested, I can only speculate, and would ask: how would we ever know if ‘real’ consciousness has been achieved, when we can’t go inside it? That goes for ANYTHING else – I can only say that I myself am conscious; I can’t speak for you or your robotic nephew-once-removed (?!), who could in fact be a not-very-well-performing drone in this matrix. And the same for you – you couldn’t know that I’m conscious myself…. you just have to take my word for it. Everything else could just be a simulation. But I’ll leave that rabbit hole there for now…!!

    The strange thing is that once AI-AGI-ASI occurs, we’ll all be fairly accustomed to intensely pervasive technology… look at how ubiquitous smartphones are / smart-tech is now, when 20 years ago basically nobody even had a basic cell phone! I had just found PAC-MAN on our new 10-pixel computer……

  13. Andy Worthington says...

    Thanks, Tom. Glad you’ve got to meet Anna online – a great friend, who took my Guantanamo film out to Poland for a week in 2011, and who also came out to the US for the annual Guantanamo protests a few years ago. This was how we met:
    So I found myself quietly alarmed by your comment that, “once AI-AGI-ASI occurs, we’ll all be fairly accustomed to intensely pervasive technology.” Ever since I first heard about people putting chips in their pets, I have been expecting to hear about some corporation selling a new tech breakthrough that involves human implants. Only a matter of time, surely …

  14. Tom Pettinger says...

    I spent a fair bit of time reading / thinking about Afghanistan after working for Adam Holloway (does Anna know him?) – and I couldn’t agree with her analysis more. Everything about Western activity there seemed a complete and utter mess, right from the rationale for invading (the basic difference between Al Qaeda and the Taleban) down to operations and the ludicrous money poured in with no effect. Billions and trillions of dollars, just to pretend that we’re helping when we were just saving our own skin at the expense of actual development there. I thought the new film War Machine did a good job of highlighting the inherent contradictions in our efforts…

  15. Andy Worthington says...

    I’m sure Anna will be able to tell you if she knows Adam, Tom – the Tory MP for Gravesham, yes?
    Thanks for the succinct analysis of the pointless Afghan quagmire. I hadn’t heard of ‘War Machine’, so will look out for it on your recommendation. I see it’s based on ‘The Operators’ by Michael Hastings, who I met very briefly at RT in Washington, D.C. before his death. He was very complimentary about my work, and his loss was really quite a shock.

  16. Tom Pettinger says...

    Yes indeed, the same, he was trying to bring sense into the madness by encouraging a political settlement there. A great summary on Afghanistan of his is here:
    Although it’s by-the-by now, we’ll leave them in just about the same state in which we invaded them… I’ve not read that book but thanks for mentioning it, it’s on order!

  17. Andy Worthington says...

    That’s a great article, Tom. Some very good writing, as well as very sharp analysis. This paragraph for example:
    “It is almost as if the international community has come to resemble a sort of self-licking lollipop – a multi-trillion-dollar machine that feeds only on itself; an alien confection that works against, not with, the grain of Afghan society. The old Bush-era mantras remain, and steely-eyed killing machines obscure steely realism.”
    I gained some serious insight into Afghanistan while researching The Guantanamo Files. That was ten, eleven years ago, and it’s absurd that we’re still there.
    Another good book is No Good Men Among the Living by Anand Gopal, who spelled out to me when I met him how we had snatched defeat from the jaws of victory in 2002. Having overthrown the Taliban and decimated al-Qaeda, we stayed and then got involved in idiotic deals with warlords who played us. Prisoners sent to Guantanamo from the summer of 2002 until November 2003, when the transfer of prisoners ended (except for later “medium-value” and “high-value detainees”), had already revealed this to me, but Gopal’s work makes it clear.
    Review here:

  18. Tom Pettinger says...

    Nice one, cheers for the recommendation, also ordered – my summer holiday reading is sorted! You certainly know some fascinating people!! I thought this comment was particularly appropriate:
    “Gopal’s book left me looking back, feeling regrets and “if onlys.” If only the United States had better understood the Afghan people, if only we knew then what we know now” — it never ceases to amaze and appal me how committed we are to sowing uninformed and very expensive destruction all over the world. We’re phenomenally effective at it…

  19. Andy Worthington says...

    I met Anand once, at a cafe in NYC, before his book came out. He wanted to meet because he’d drawn on my research about the Afghans in Guantanamo, and wanted to compare notes. I wanted to follow up with him on more detailed research into the former prisoners, to establish how the recidivism statistics coming from the Director of National Intelligence, and uncritically picked up by the media and Republicans, couldn’t possibly be correct, but unfortunately that never happened, and now it seems rather less essential, as Trump is like a giant boulder – or fat ball, more accurately – blocking the entrance to Guantanamo, and nuanced discussions are a thing of the past.
    Anyway, yes, a really great book.
    As for regrets and ‘if onlys’, the comment you quoted made me think how important our essential natures are – or the outlook we develop at a young age. Mine is both relentlessly questioning (and especially of authority), and, connected to that, one of extreme caution when it comes to effecting all kinds of major change. As a result, I’m always too informed and suspicious to endorse any kind of proposed change that involves military actions, as I see through its supposed justifications, and see the truth instead, which always seems to involve some revival of colonialism/imperialism, and always fails to recognise that, if we invade other people’s countries, they will feel at least as upset as if someone invaded our country. The failure to do so is very telling about our racism and outrageous sense of superiority.
    My suspicion and caution also lead to numerous non-military decisions – like my implacable opposition to Brexit, for example, which is another classic of wishful thinking, a noxious bag of bullshit concocted through a completely deluded nationalist prism!
    I hadn’t really formulated the twin poles of my worldview until I responded to your comment, so thanks for that!

  20. Anna says...

    What a day full of inter-twining coincidences. Weird to re-read that first mail of mine, if only because in October 2009 I worked in a relatively well-off Scandinavian NGO with as good internet as was available then (to civilians, that is) and I had forgotten how unreliable and excruciatingly slow it was :-). Sadly, I would not change anything in its contents, although since then I have come to the conclusion – as the only logical explanation for by now almost 16 years of what otherwise would be unbelievable fumbling – that the underlying reason for the US/NATO’s relentless efforts to keep that ‘war’ alive is the sprawling and very solid military bases which they have built there and from which they can militarily control the neighbouring countries. Those being Iran, Pakistan, India, former parts of the Soviet empire which are still closely linked to Russia, and even China …
    Who would willingly give up control over such a strategically located country, which is so utterly defenceless that there isn’t even any need to negotiate the terms? The only thing that still puzzles me, is whether all the bloody ‘coalition partners’ in this colonial gang-rape are aware of this objective – would it be openly discussed during NATO summits? – or are they just blindly relying on the mostly US ‘experts’, who feed them all those increasingly irrational rationales for having to keep on ‘surging’ there in spite of the lack of any progress worthy of that name?
    And that, I’m afraid, seals the fate of that tragic country. And of course it also provides a perfect opportunity to try out new weaponry before selling it to Saudi Arabia et al as ‘field-tested’ (that’s how Israel advertises its arms for sale after using them on Palestinians). And since we’ve bribed the Afghan government (oh, those terribly corrupt Afghans!) with a few billion euros into taking back its refugees, we send them back in droves. Particularly young men (“they’re all crypto-terrorists”), who then risk ending up in the ranks of the taliban or nowadays even ISIL, depending on what part of the country they originate from.
    And speaking of refugees, here’s the next link in today’s chain :-). I do not know Adam Holloway, so I googled him and found this rather off-putting information about his attitude towards refugees/migrants, which he later tried to mitigate but I do not buy such retractions:
    No major problem with his take on Afghanistan apart from the idea of “separating ordinary people and non-ideological fighters from the hardcore Taliban”, not only about as feasible as sorting the individual sprigs in a haystack by let’s say their circumference, but also a bit too much like the ‘moderate & fanatic’ muslim narratives.
    Ah, Marjah, the famous or rather infamous ‘surge’ which would signal the beginning of the final victory! Touted well in advance (what happened to surprising the enemy by keeping surges a secret?) as if it were indeed about Stalingrad, while Marjah apparently is a small sneeze-and-miss-it rural town, which our invincible forces managed to hang on to for two whole weeks! And a feedback loop to ‘Afghan corruption’, as the marines received 250,000 USD in 500 Afs (roughly 10 USD) bills to distribute among ‘the locals’, so as to open their hearts & minds to the idea of being liberated by the US (and presumably UK, as it was Helmand) army: (this is the only way I know to include the relevant picture, which is indeed from that surge). I hate to think of the fate of the recipients of that manna once the liberators had abandoned Marjah back to the – vengeful – taliban.
    And to finish the deeply depressing Afghanistan subject: in fact we are leaving them (assuming we are actually leaving) in a much worse state than we found them in 2001 … Security has decreased dramatically, much of our advertised ‘development progress’ exists only on paper (the real progress was by and large brought about by the Afghans themselves), and the wonderful atmosphere of cautious optimism and hope which I found there about one year after the fall of the taliban has now morphed into increasingly desperate gloom. Not to mention over 2 million drug addicts, as compared to fewer than 2,000 in those days, and sky-rocketing opium production. And massive unemployment, that fertile breeding ground for any sort of radicalisation.

    Without in any way wanting to minimize the horror of the Soviet occupation (‘only’ nine years), they did at least build apartment blocks which still are in high demand, factories which provided employment and generated income, a dam for hydro-electric power, education, a cultural centre and other true development in various fields, albeit mostly in bigger cities. In practically 16 years, we have not even achieved that …

    Back to the original AI subject :-), I happened to see today Werner Herzog’s 2016 film ‘Lo and Behold’ about IT, which I recommend. Don’t be put off by its boring beginning with a couple of the pioneers praising their own achievements, it quickly gets really interesting and Andy, I know you’ll love Herzog’s final ‘statement’ at the very end of the film.
    Just seeing two boyishly enthusiastic neuro-scientists talk about the horrors they would like to invent for us out of sheer scientific curiosity makes it worth watching.

    As for summer reading, I today got Naomi Klein’s ‘This Changes Everything’ about climate change (and no doubt much more) and – Lo & Behold 🙂 – she appeared tonight on Mehdi Hasan’s AJE ‘Up Front’ about (how to fight) Trump, apparently the subject of yet another book of hers. Don’t miss it; the other segment was about Hollywood and US TV being influenced – and worse – by the Pentagon & CIA. And that closes the chain of coincidences for today :-).

  21. Anna says...

    Forgot to add, in Herzog’s film I first discovered ‘solar flares’ and what they can do to the internet, and thus to us …

  22. arcticredriver says...

    I often quote a pithy comment one of my computer science professors (Dr. Morven Gentleman) used to say, thirty-five years ago.

    “In my experience ‘intelligent devices’ are not characterized so much by intelligence, as by a certain kind of low animal cunning.”

  23. Andy Worthington says...

    Great to hear from you, Anna.
    Lots to digest here. I think your analysis of the reasons for the endless occupation of Afghanistan is very powerful – about it being a strategically important place from which to exercise military control over all the surrounding countries (or, at least, to create that illusion) – and I too wonder to what extent the US’s allies are fully on board with the notion of this “colonial gang-rape”, as you so rightly call it.
    I am also disappointed by the position taken by Adam Holloway in that link you sent. It suggests to me that people fleeing great danger are supposed to pack up and return home if it appears that the situation in their home country has improved, when the reality of uprooting oneself is that people can find themselves permanently transplanted through a combination of factors, and that no one else should have the right to imperiously dictate what they should do. That’s my reading of it, anyway …
    I found myself particularly depressed by your comparison of the Soviet and US occupations, and the realisation that, yes, of course, the former involved creating a functioning society and infrastructure, whereas the US forces are, in contrast, little more than vandals.
    As for Herzog, I shall have to have a look for ‘Lo and Behold!’ You also reminded me that I have never seen ‘Lessons of Darkness’, the film he made after the first Gulf War, featuring Kuwait’s burning oil fields, which has always intrigued me. It’s on YouTube here, but that’s probably not the best format for watching it:

  25. Andy Worthington says...

    Ha! Thanks for that, arcticredriver!

  26. Tom says...

    While it’s important to be aware of changes and future possibilities, remember one thing. Human beings are the ones who come up with concepts, designs and write code. Even if you’re using supercomputers, do they design themselves and write their own code? If someone’s developed that, I sit corrected. But I don’t know of any.

    There will always be a human being involved in the process.

  27. Anna says...

    Andy, ‘it has history’ certainly is an understatement :-).
    What worries me is the future, this phenomenon’s scary probability. So we barely avoided a technological implosion just five years ago (!) – and was the world informed about that? I wonder how big the statistical chance is that one of those solar CMEs heads straight for us rather than barely passing us by. If I were more biblical, I would say that the last ones will be the first ones. Those who now live without electricity and grow their own food – those who live and toil not to accumulate gadgets but to survive – will be least affected.
    There would be some basic justice in that, I suppose. Many times, when in some village at the end of the world, I marvelled at the absurd idea that somewhere a few thousand miles away there were places like NYC, where millions of people were running around like madmen to earn enough money to buy the countless gadgets they could easily live without. It really is hard to imagine ‘the middle of nowhere’, and the people who live there without TV or even radio certainly cannot imagine our world. Almost 40 years ago, someone in such a village in Asia asked me how far away my country was, and I realised I would have to translate 10,000 km as the bird flies into walking days. If our power grids went down for a long period of time in places like NYC, it would not mean a thing to such villagers. Come to think of it, they would be as unconcerned and unmoved by it as most of our society is when millions are starving in Yemen, except that they have no way of knowing or even imagining our world.

  28. Anna says...

    Arcticredriver’s professor had great insight. As for Herzog’s Kuwait film, for copyright reasons it is not available ‘in my country’, but the trailer is eerie enough.

  29. Andy Worthington says...

    Yes, I think the fear, though, Tom, is that the machines take over the entire process.
    Here’s an open letter, ‘RESEARCH PRIORITIES FOR ROBUST AND BENEFICIAL ARTIFICIAL INTELLIGENCE’, signed by over 8,000 individuals in the AI field, which includes the following concern: “We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.”

  30. Andy Worthington says...

    Hi Anna,
    Yes, when you compare the global poor with neurotic westerners, it really shines a light on the monstrous scale of the default self-absorption of the West, so far removed from the daily challenges of survival. I do find it depressing, however, how, for the most part, people cannot imagine any other reality. I recently met a lifelong green activist, and we ended up talking about what would happen if there was a sudden widespread understanding that we needed to put the environment first, rather than the ceaseless drive for capitalist profit, and of course it would be exciting, a breath of fresh air, as well as, obviously, something that in other ways would be daunting and challenging. Imagine a world where we counted the real cost of our consumption, where we agreed to limit the dominance of the car, where we refused to accept that making a profit was the only measure of the worth of anything …

  31. Andy Worthington says...

    Yes, I suspected there’d be copyright issues, Anna, but I thought I’d put it up in case any curious British readers alighted on the page. It strikes me as slightly odd how universal so much information is and yet some entertainment corporations retain cross-border controls on videos, and DVDs are still globally divided by region. I occasionally forget to check, in visits to charity shops (which I frequent regularly in search of books and DVDs), and end up with region 1 DVDs (American), which, of course, are completely useless in the UK.

  32. Andy Worthington says...

    Well, how timely is this? In the Guardian, ‘Elon Musk: regulate AI to combat ‘existential threat’ before it’s too late’:

    Tesla and Space X chief executive Elon Musk has pushed again for the proactive regulation of artificial intelligence because “by the time we are reactive in AI regulation, it’s too late”.

    Speaking at the US National Governors Association summer meeting in Providence Rhode Island, Musk said: “Normally the way regulations are set up is when a bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry.

    “It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation.”

    Musk has previously stated that AI is one of the most pressing threats to the survival of the human race, and that his investments into its development were made with the intention of keeping an eye on its development.

    “AI is the rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’ll be too late,” Musk told the meeting. “AI is a fundamental risk to the existence of human civilisation.”

  33. Andy Worthington says...

    Tashi Farmilo-Marouf wrote, in response to 7, above:

    Basic income is only humane, Andy. Yet certain humans are selfish and greedy and believe that those who “earn” are deserving of rewards, while ignoring how those earnings are often achieved, how unequally they are apportioned, or how little they relate to the work itself.
    Take bankers, for example, they get huge bonuses but do they work harder than a street cleaner, nurse, waiter, clerk, etc.?
    Why do CEOs and their employees have such a huge pay gap?
    Why are women still paid less than men for the same jobs!
    AI may balk less, maybe that’s why they are desired… They won’t fight for their rights!
    But we humans have a long way to go before we even learn to treat each other well and respectfully, before we are “good” enough to create another kind of being. Do we really want to impart these awful aspects of ourselves into another creature? Ugh

  34. Andy Worthington says...

    I just saw your reply now, Tashi, and in brief, yes, it has to be troubling that we are meant to trust those introducing advanced AI when, as you say, the humans involved will be bringing their often extremely unhelpful baggage with them. It also worries me profoundly that, however much scientists may be nominally driven by supposedly “pure”, objective notions of scientific research, their funding will, for the most part, come from people whose interest is in making a profit.

  35. arcticredriver says...

    There was a guy named Joseph Weizenbaum, an early researcher in Artificial Intelligence, best known for developing the often copied program called “Eliza”, who wrote a book entitled Computer Power and Human Reason: From Judgment to Calculation, after he became disillusioned.

    Some of the most memorable lectures from my time at University were the informal lectures visiting scholars delivered to the Computer Science Club. Weizenbaum delivered one of those lectures. He spoke about his disillusionment coming to a head during a sabbatical year, and his interactions with Social Scientists at the institution where he spent that sabbatical.

    He had developed a program that could respond to text submitted to it by humans with responses of its own. He and his grad students developed a series of scripts for this program, each intended to be from the point of view of a human, filling a particular role.

    Eliza, the “Doctor” program, was the only script that proved interesting. I remember one of the others was a traffic cop.

    Eliza was designed to respond to the human like a Rogerian psychotherapist. In Rogerian psychotherapy the therapist turns around the patient’s statements, and turns them into open-ended questions. The program could recognize the nouns and verbs well enough to turn things into questions, without understanding them at all.
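    The reflect-and-question trick described above (recognise the nouns and verbs, swap the pronouns, and hand the statement back as an open-ended question) is simple enough to sketch in a few lines. What follows is a hypothetical Python toy, not Weizenbaum’s actual program, which used a much richer keyword-ranking script; the names `reflect` and `respond` and the tiny pattern list are my own illustration of the technique, not anything from the original Eliza source.

```python
import re

# Pronoun swaps that turn the patient's words back on the speaker.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

def reflect(text):
    """Swap first- and second-person words so a statement points back at the speaker."""
    words = text.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(statement):
    """Turn a patient's statement into an open-ended, Rogerian-style question."""
    cleaned = statement.lower().rstrip(".!?")
    m = re.match(r"i (?:feel|am|think) (.+)", cleaned)
    if m:
        # "I feel X" becomes "Why do you feel X?" with pronouns reflected.
        return f"Why do you feel {reflect(m.group(1))}?"
    # Fallback: echo the whole statement back as a question.
    return f"Why do you say that {reflect(statement)}?"
```

    As Weizenbaum acknowledged, there is no understanding anywhere in this: `respond("My mother hates me.")` comes back as “Why do you say that your mother hates you?” purely by table lookup.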

    That is when the trouble started. Some genuine therapists, some of them prestigious, started to write about the day when they wouldn’t have to listen to patients at all, or would only do so under special circumstances. Instead, they would monitor summaries of patients’ dialogues with the robot therapist.

    Weizenbaum said he was shocked that any actual therapists would fail to recognize the human judgment required for a genuine connection between therapist and patient.

    Carl Sagan, generally beloved, was one of those who spoke publicly about the promise of therapy programs like Eliza.

    He coined a term for those former colleagues he felt had gone too far. He called them the “Artificial Intelligentsia”

  36. Andy Worthington says...

    Well, that’s a clever turn of phrase, arcticredriver – the “Artificial Intelligentsia.”
    So you’re saying that Carl Sagan was criticising colleagues for NOT believing in what he saw as the promise of the program? I wasn’t entirely clear what you meant by him coining a term for those “he felt had gone too far.” Did you mean “gone too far” in not believing in the program?

  37. arcticredriver says...

    Sorry, it was Joseph Weizenbaum who felt both his colleagues, fellow AI researchers, and fans, like Sagan, had gone too far, and risked dehumanizing us, by empowering automated processes, like robot mental health counselling, that stripped us of human rights.

    Sagan spoke positively of the future of robot mental health counseling. I heard him give a talk where he used a clever but made-up dialogue of a skeptic’s session with the robot doctor, which ended with the skeptic explaining his worries about the future of automation in general, and robot psychiatry in particular. He had his skeptic explain his concerns to the robot doctor, telling it that robot doctors had no insight and could never bring anything new to the session, to which he has the robot doctor respond with, “don’t you think it’s strange that, during this entire conversation, you haven’t once mentioned your mother or your father?”

    Sagan got a hearty laugh from the audience. The robot doctor was programmed to throw in a few zingers, but that didn’t alter the fact that it was, as Weizenbaum acknowledged, just a bag of tricks – with no genuine insight, or even anything approaching what we humans call understanding.

    That talk was at the University of Toronto. I don’t know why, since Jacob Bronowski spent almost all his life in Britain, but he donated all his papers to UofT. They (used to) get a famous person to give an annual lecture in his memory. Around 1980 it was Sagan.

  38. Tom says...

    Fear, being aware and cautious are all valid. But I go back to what I said before. While I’m not a world IT expert, I do try to keep up with the latest developments in software, security and design. That being said, think of the super computers used by the CIA. The NSA. GCHQ. Yes, big budgets and computing power. But without humans to write the billions of lines of code and maintain the system, it’s nothing.

    On the other hand, if it is now possible for AI systems to think and reason just like humans do, nobody’s going to admit that they have that. Forget the soundbites from Trump and May about the “special relationship” between the US and UK. If May knew that GCHQ had this capability, would she actually share that with Trump? I wouldn’t. Why? Because this would be one of the biggest tech advances in history. The UK would be literally the most powerful nation in the world. If May and the Deputy PM (forgot their name, sorry) refused to tell anyone else high level in the UK govt., it would be like a scene out of “Spooks”. Sir Harry Pearce tells the Deputy PM, why haven’t you told any other Cabinet members or other high level officials? By not telling us, you’re making us look like schoolboys.

    The main reason Elon Musk is hyping this issue in the press is competition with Jeff Bezos and Amazon, and Mark Zuckerberg and Facebook. All three want global power.

  39. Andy Worthington says...

    Thanks for the update, arcticredriver.
    Here’s the latest from the AP about the disagreements between Elon Musk and Mark Zuckerberg, following Musk tweeting that the Facebook founder’s “understanding of the subject is limited”, an analysis that I believe is accurate:

  40. Andy Worthington says...

    Thanks for your thoughts, Tom. I take Elon Musk’s fears at face value, along with those of Stephen Hawking. They have much more scientific credibility than Zuckerberg and others defending AI.
    As for the British deputy PM, we don’t have one! Check this out!
    May’s friend (her only friend?) Damian Green was appointed First Secretary of State and Minister for the Cabinet Office after June’s disastrous election, which is the closest we currently have to that role.
