5 Disabling Barriers New Tech Is Helping To Smash Down For The Physically And Developmentally Impaired


Jenny Morris – a disabled feminist and scholar – has argued that the term “disability” shouldn’t refer directly to a person’s impairment. Rather, it should be used to identify someone who is disadvantaged by the disabling external factors of a world designed by and for those without disabilities.

Her examples: “My impairment is the fact I can’t walk; my disability is the fact that the bus company only purchases inaccessible buses” or “My impairment is the fact that I can’t speak; my disability is the fact that you won’t take the time and trouble to learn how to communicate with me.”

According to Morris, any denial of opportunity is not simply a result of bodily limitations. It is also down to the attitudinal, social, and environmental barriers facing disabled people.

For those of us without impairment, it can be difficult to understand the complexity of these challenges. Perhaps most tangible are those we can observe, like the way in which our constructed environment can overlook the physical needs of certain individuals – tall buildings without elevators, pedestrian crossings without audio cues, videos without captioning, etc.

Though we all know by now that the tech sector has had a pretty damaging 2017 from an ethical perspective, it is still important to note that some technologists have been making noteworthy efforts to address such obstacles and open up previously inaccessible (and difficult-to-access) worlds.

Though social and attitudinal barriers continue to create a disabling effect, here are five areas where new tech and AI are bringing barriers down:

  1. Devices 

The clean lines and touch screens of tablets and smartphones are easily amongst the most popular tech breakthroughs of recent decades. However, at first glance these sleek, smooth interfaces look completely inaccessible to blind and partially sighted individuals who rely upon textured surfaces to communicate and consume media. Thankfully, this is not the case. Despite the launch of the iPhone initially sending the blind community into a tailspin, smartphones have actually enabled people with sight impairments to keep pace with the rest of us when it comes to tech – and all without having to purchase costly or clunky add-ons. Indeed, advocates for the blind have even said that smart devices could be the biggest assistive aid to come along since Braille was invented in the 1820s.

But how so?

Built-in accessibility functions (which many of us use…), like voice control, allow those with vision problems to look things up using search, or to compose texts. Additionally, the visually impaired can use the GPS to navigate, and the camera to determine the denomination of cash. Those who are progressively losing their sight can also increase brightness, invert text color, and enlarge the text characters for better clarity.

Moreover, smartphones now support dozens of Bluetooth Braille keyboards, and companies like Ray have developed assistive mobile applications and textured adhesives to allow sophisticated, eye-free device control.

  2. The internet and social media

Having a functional and operable device is one thing, but it is also important that users can access and communicate via the most popular internet platforms. The alternative is damaging social exclusion. Fortunately, some really promising work is already being done to level the playing field and to create better accessibility online.

IBM, for example, have leveraged some of Watson’s computing power to create Content Clarifier, an artificial intelligence solution which employs machine learning and NLP to make reading, writing, and comprehending content easier – which is great news for people with autism or intellectual difficulties. The system switches out complicated content and filters away unnecessary detail. So, idioms like “it’s raining cats and dogs” will be converted seamlessly to “it’s raining hard”, helping users avoid confusion (and frustration) when using the internet.
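IBM hasn’t published Content Clarifier’s internals here, and its real pipeline relies on machine learning and NLP rather than a fixed lookup table. Still, the core idea of swapping idioms for literal phrasing can be sketched in a few lines (the idiom table below is invented for illustration):

```python
# Toy illustration of idiom simplification. The real Content Clarifier
# uses ML/NLP; this sketch just shows the substitution concept.
IDIOMS = {
    "it's raining cats and dogs": "it's raining hard",
    "break a leg": "good luck",
    "piece of cake": "easy",
}

def simplify(text: str) -> str:
    """Replace known idioms with plain-language equivalents (case-insensitive match)."""
    lowered = text.lower()
    for idiom, plain in IDIOMS.items():
        while idiom in lowered:
            start = lowered.index(idiom)
            # Splice the plain phrase into the original-case text
            text = text[:start] + plain + text[start + len(idiom):]
            lowered = text.lower()
    return text

print(simplify("Grab an umbrella, it's raining cats and dogs!"))
# → Grab an umbrella, it's raining hard!
```

A production system would also need to handle inflected idioms ("rained cats and dogs") and context, which is precisely where the machine learning comes in.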

Also, as many will have read recently, Facebook have just improved the way in which the blind and visually impaired can use the site. Last month, the company revealed that it will be deploying facial recognition technology to identify people, animals, and objects, and use them to annotate photographs with alternative text. Previously, hovering over pictures would only reveal the number of people in them, and prior to that technology would only identify “photo”.
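Once a vision model has produced labels for a photo, assembling them into alternative text for a screen reader is straightforward string-building. This is a hypothetical sketch – the label list and phrasing are invented for the example, not Facebook’s actual implementation:

```python
# Hypothetical sketch: turn detected image labels into alt text.
def build_alt_text(labels: list[str]) -> str:
    """Compose screen-reader alt text from a list of detected labels."""
    if not labels:
        return "Photo"  # fall back to the old, uninformative default
    return "Image may contain: " + ", ".join(labels)

print(build_alt_text(["2 people", "dog", "outdoor"]))
# → Image may contain: 2 people, dog, outdoor
```

The hard part, of course, is the detection itself; the value of the feature comes from the accuracy of the underlying recognition model.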

Lastly, back in March, YouTube rolled out artificial intelligence that builds upon its existing auto-captioning for the deaf. The new tech can now identify more complex noises like applause, laughter, and music in videos. Just like Facebook’s facial recognition technology, this also relies upon machine learning, and could ultimately pick out barks, sighs, knocks, etc., improving the service for those with hearing impairments.

  3. Gaming

This is another area where a lack of dexterity, limb loss/paralysis, or a visual impairment would’ve historically prevented would-be gamers from accessing mainstream titles. Now companies like Natural Point are manufacturing special use joypads with built-in head or eye-control to enable a whole new range of users to enjoy video games. The same is true of Quadstick who produce gaming tools for quadriplegics, giving players control via a clever sip/puff sensor.

One-switch games (which are what they sound like) are also available to those with severe disabilities, whilst mainstream gaming giants like Xbox now sell accessible (and reasonably priced!) controls for their systems at Walmart. Oh, and this looks like it could be fun for those with limited upper body or hand mobility who are keen to get stuck into virtual reality.

  4. In-person communication

Though it’s great that tech is improving itself (so to speak), and addressing accessibility issues from the inside, artificial intelligence also has a key role to play in tackling the disabling barriers “out there” in the real world.

SignAll Technologies, for example, is currently piloting an exciting product which uses sensors and cameras to track and automatically translate sign language in real-time. If successful, this could dramatically improve communication between deaf and hearing individuals.

At the same time, tremendous work is being done by The Open Voice Factory, who develop electronic speech aids which allow those with speech-language impairments to make themselves understood. The software they make “converts communication boards into communication devices”, and can run on any platform (tablet, smartphone, laptop…). Best of all, it’s completely free!

  5. The physical environment

Perhaps the most challenging task for someone with a physical impairment is traversing the many obstacles of the outdoor environment. Here too, though, tech is making new gains.

Arguably, the most elaborate of recent developments is Hyundai’s exoskeleton, which had its “big reveal” in December 2016. Though we’re unlikely to see these on the streets any time soon, it certainly demonstrates the promise of robotic development when it comes to improving navigation of the physical environment for paraplegics and others who currently require assistance.

In the same vein, another conversation that is heating up concerns self-driving wheelchairs – for which the technology already exists. A self-driving chair uses an existing powerchair as its base, and allows the user to maneuver around without having to physically operate the wheels, which can be stressful and tiring. Many within the disabled community are urging more investment in this technology, and ultimately hoping for a widespread roll-out.

Finally, a number of technologies are also coming onto the market for blind and vision impaired users, some of whom also find themselves at a disadvantage when it comes to getting around safely. One such product is Aira, which uses special glasses and augmented reality to help a (real live) service agent guide a user around an unexpected obstacle. The service agent can access cameras mounted in the user’s glasses and literally act as their eyes in order to give live support.

Of course, this is not an exhaustive list of assistive technologies. Nor is it an exhaustive list of the barriers facing disabled people, many of which are – as previously stated – social or attitudinal, rather than merely physical.

Equally importantly, it is not a call for the redemption of big tech after a year in which closer forensic inspection revealed a disappointing disregard for ethics.

This rundown simply highlights and heralds some exciting developments – the likes of which we should be championing and encouraging – that are likely to improve people’s lives by facilitating independence and overcoming several of the exclusionary features of our world.

If 2018 turns out to be a year in which the ethics of tech is front-and-center (as it is certainly looking to be), then we should make sure the focus is not simply on cleaning up unethical environments and practices, but also considering what ethical good tech could, and should, be creating. To ensure – as Jenny Morris says – that anatomy is not destiny.


Want Artificial Intelligence that cares about people? Ethical thinking needs to start with the researchers

We’re delighted to feature a guest post from Grainne Faller and Louise Holden of the Magna Carta For Data initiative.

The project was established in 2014 by the Insight Centre for Data Analytics – one of the largest data research centres in Europe – as a statement of its commitment to ethical data research within its labs, and the broader global movement to embed ethics in data science research and development.


A self-driving car is hurtling towards a group of people in the middle of a narrow bridge. Should it drive on, and hit the group? Or should it drive off the bridge, avoiding the group of people but almost certainly killing its passenger? Now, what about if there are three people on the bridge but five people in the car? Can you – should you – design algorithms that will change the way the car reacts depending on these situations?

This is just one of millions of ethical issues faced by researchers of artificial intelligence and big data every hour of every day around the world.

And it’s not just researchers. Say you’re a parent and your child’s school is taking part in a national data project to track the health status of children. You believe the project is valuable and give consent for your child’s lifestyle and health information to be collected. However, a few years later, your child reaches 18 and doesn’t want her early health and lifestyle profile to be available to researchers. How do we sort that issue out? Withdrawing the data might compromise a project that will benefit many, yet consent has already been given – can your child withdraw that consent?

These are issues that need discussion and examination. Politicians, philosophers, ethicists, lawyers, human rights experts, technology designers and a multitude of others all agree that we need to be aware of these issues, that we need to protect human values in the age of artificial intelligence, and yet, our thinking is not keeping up with the rate at which these technologies are being developed.

How do you plan for the future when you don’t know what questions and issues are coming down the line? How can you put a framework in place when something unprecedented might be around the corner?

These are the questions that are paralysing the ethics conversation around the world. It’s all too complicated and too complex and yet, we can’t allow this to get away from us any more than it has already. So where on earth do we start?

Well, perhaps we should start with the ethical questions that already exist? It sounds simplistic, but very little of this examination is happening. The problem is that the people who develop the technology and the people who develop ethical thought and theory tend to live in different academic worlds. They speak different languages. So how do we bring them closer together?

We should be asking researchers working in AI and Big Data around the world about the ethical issues they face in the course of their work. We should then make those questions available to the people whose job it is to find solutions to ethical problems.

We have been looking at AI and big data research from the top of a mountain, trying to catch it all in a big ethics framework. We need to keep doing that, but we also need to get down on the ground and find out what ethical issues researchers are dealing with today. If we know what issues are arising today, we will be more prepared for the unknown down the line.

A new website, www.magnacartafordata.org, is attempting to do exactly this. It is gathering real-life case studies and experiences of ethical issues from data and AI researchers and making them publicly available for the general public and researchers from different disciplines to read over.

The issues that arise for individuals tend to be less dramatic than the self-driving car conundrum.

Take researchers who use Twitter data, for example. To open a Twitter account, you have to agree to the Twitter terms and conditions. Did you know that by doing that you have consented to your data being used for research purposes? While researchers are allowed to use your data, a lot of them feel slightly uncomfortable about it because they don’t believe that ticking a box at the end of a document nobody reads is proper consent. This is an issue that comes up pretty frequently.

Or how about someone who collects social media data to track interactions that happen between communities of people who have, say, an eating disorder? The researcher could have great intentions of helping these communities, but could a health insurance company take that same dataset and use it to determine who may have had bulimia as a teenager? The unintended, negative consequences of big data and AI research are another issue that weighs on the minds of researchers.

By gathering these sorts of questions, we are learning about the details and subtleties of the current ethics landscape. We are providing a place for researchers to air these concerns, and we are making those concerns available to experts who can help.

Interestingly, we are finding that researchers are beginning to talk directly with ethicists, and come up with solutions to ethical issues themselves. We’re essentially crowdsourcing solutions to ethical problems.

It’s not the whole solution, but it’s certainly part of it and we’re excited to see what the future brings.

Follow the Magna Carta for Data project on @DataEthicsIre.

Should you have sex with an AI?

It might not be a question you’re asking yourself right now, but according to a California-based developer of artificially intelligent sex robots, they will soon be as popular as porn.


This is, at least, the hope of Matt McMullen. He’s the founder of RealDoll, a “love doll” company featured in the documentary “The Sex Robots Are Coming”. The film seeks to convince its audience that combining undeniably lifelike dolls like Matt’s with interactive, artificially intelligent features will lead to an explosion in the market for robotic lovers.

But is this okay? Many say that it absolutely isn’t.

Indeed, just as the Campaign To Stop Killer Robots gathers pace in the media, a lesser known campaign is trying to make its voice heard: The Campaign Against Sex Robots. And they’re just as serious about their issue.

Those in favor of sex robots tend to mount familiar-sounding arguments. They say that access to lifelike, feminine AI (McMullen says 80% of his custom is for female dolls) will help reduce the use of prostitutes, and therefore protect many vulnerable women and children who are forced into an underground and often brutal industry.

You may note that this is the same claim that has been used for many years to defend all types of pornography, and the use of more “traditional” sex dolls. Yet – as everyone knows – the sex trade has continued to expand, and sex-trafficking has become an international crisis of some urgency. Whatever purposes sex products serve, the protection of sex workers does not appear to be one of them.

Indeed, The Campaign Against Sex Robots claims that the tech boom has actually supported and contributed to the growth of the sex industry.

Campaigners at the fore also have another worry. They’re concerned that the blending of sex and AI will further reduce human empathy, to dangerous effect. Empathy, they argue, is something that requires the experience of a mutual relationship.

The blame for the erosion of this critical human instinct does not sit uniquely with sex robots, but is also connected to the kinds of video games that reward the killing of prostitutes (i.e. Grand Theft Auto), and virtual reality that simulates the sexual abuse of children.

These are all cases where proponents claim that the recipient (of sex, abuse or sexual abuse) is not real, and therefore the act cannot constitute a real harm. The Campaign Against Sex Robots would argue that the ensuing attitudinal knock-on effects are very real indeed – especially the implications for real-life sex workers, to whom sex robot users may “graduate”, rather than the robots replacing them.

If we learn through practice that sexual partners are “things” that exist for our satisfaction (however unpleasant the act of satisfying is…), there is no reason why that general mindset wouldn’t be transferred to human partners.

Which leads to the most obvious objection to these AI love dolls: if we embrace a world in which sex with these (almost always) cartoonishly female AI is normalized, we are exacerbating the universal objectification of women. Something that many, many women have fought against over many, many years.

I suppose the question is, in taking some technological steps forward, do we not run the risk of taking many societal steps back?

Watching the documentary, it felt as though the makers of “Harmony” (a talking, interactive love doll) were trying to create something as close to a female as possible, without any of the unattractive characteristics of a real woman: extra body weight, wit…resistance. These are the women that will never say no, always look fabulous, and never ever mount a challenge.

In other words, the makers of AI sex dolls reimagine and very openly reduce an “ideal woman” to body parts and a combination of sexually provocative and submissive responses.

In spite of these objections, some are keen to highlight the potential benefits. AI expert and author of Love and Sex With Robots, David Levy told Time magazine:

“I think the really massive benefit is that there are millions of people in this world, who for one reason or another cannot make good relationships themselves with other human beings. And so they’re lonely and miserable. I think when they’ve got the option of having relationships with very sophisticated robots, that will for many of them fill a big void in their lives and make them much happier.”

So, who do we think of? The lonely few? Or the generations of young women who could be brought up in a world where adults own lifelike AI, built to look just like them but manufactured and sold exclusively for sexual purposes?

Some will certainly contend that the anti sex robot movement leverages arguments that could be seen to condemn all pornography, which seems extreme, if not puritanical. Truthfully, they could be construed that way, but surely this depends on whether we believe there’s a distinction between watching and doing? Between viewing the human form and owning a human form? To be blunt, between masturbation and sex.

I think most of us would be more comfortable with the idea that someone in our home had viewed certain websites discreetly on a laptop or tablet, and less comfortable with the idea that “Harmony” was charging underneath the bed. There is, surely, a psychological difference for both parties – and, indeed, for David Levy’s sexually frustrated loners.

This week the media reported that new guidelines have been released for the ethical development of AI robots. I’ve yet to leaf through the 266 pages, but critically these guidelines refer to design, i.e. making sure robots don’t do anything unethical, like exhibit bias against certain humans or nudge people into bad decisions. Perhaps – as with sex robots – we should also be thinking about the uses of outwardly innocuous technology. How should this AI be presented so that it is consistent with the norms and values of modern society?

Harmony, after all, is merely a robotically embodied chatbot – much like Sophia, Saudi Arabia’s robot citizen. The fact that her program is sexual is not, on its own, hugely problematic. She only really becomes troubling when you consider three factors combined: her (convincing) submissive and sexual AI “mind”, her (convincing) anatomically correct, single-purpose robotic body, and – perhaps most relevantly – the attitude of her “users”, who actively and deliberately seek to conceive of her as a real human.

Somehow, as separate items, none of these seems quite as disturbing as the sum of their parts. It would be intriguing to know whether, if either of the first two were altered in any way, the illusion that manufacturers and their buyers want would be destroyed.

In other words, is this really a sex aid for men whom women dislike? Or for men who dislike women?

Online choice “nudge” and the convenient encroachment of AI


The beginnings of the internet seem so long ago to those of us who lived through them. Hours spent trawling through pre-Google search results, which often ranged from the useless to the bizarre. Blindly researching gifts and listening to music, sans intelligently selected recommendations. Checking social media accounts of our own volition, rather than through prompting from “notifications”.

Then the world began to change.

Under the banner of convenience, clever algorithms started to adapt both to our interests and – critically – the interests of commercial entities. We saw (or rather didn’t see) the covert introduction of the digital “nudges” that now regularly play upon our cognitive blind spots, and work to “guide” our decision-making.

These nudges work by manipulating our online “choice architecture” to gently shepherd us toward certain purchases or endorsements (usually those preferred by the tech giants). They do this in a variety of ways – by framing these options appealingly, by incentivizing clicks and likes, by making a choice seem popular or easier, or simply by giving us their preferred option as the default.

All of a sudden, Google searches were structured to promote those with the most advertising muscle, social media feeds became more exciting, glamorous, and ad-filled, e-commerce sites filled their landing pages with items they presumed we (specifically) might like to buy, and hotel websites induced panic by telling us how many people had already booked our dream holiday.

Still now, when we browse the internet we receive these subtle pushes and prompts inviting us to behave in certain ways.

For a short while, these clandestine little nudges only worked if we actually logged on. We had to browse the internet or check our emails to become subject to their charms. The rest of the time we could just live our lives: walk dogs, attend school and work, deal with kids, mortgages, dinner, and shop in bricks and mortar stores. We were largely invulnerable to the internet and its self-serving attempts at seduction.

Enter smartphones and apps.

Now, for all of the convenience of email on-the-go and Google maps, we must carry around a pocket pest hellbent on “helpfully reminding” us to book a flight, go on a run, take advantage of an offer, read a story, or check out a status update. With a smartphone, it doesn’t matter where we are (and be sure, it knows where we are), because we can be prompted and nudged regardless. In North America alone, marketers send out over 671 million push notifications each day, all of which are designed to “pull users back in” to the app and sell to them.


A sales explanation of “fear of missing out” or FOMO marketing for e-commerce sites.

To use theatrical language: smartphones came to “break the fourth wall.” Now that it is broken, anything tech companies can do to either elbow their way into your life – or draw you into theirs – is game on.

Feeling down? Come and unload to a chatbot therapist. Want to turn the lights off and watch a movie? Get a virtual assistant for your coffee table and they can do it for you. AI is becoming ubiquitous. And with every interaction we teach it a little more about who we are (both as people, and as a species), which in turn helps it to fine-tune those nudges and make more money from us, and from upcoming generations who are likely to be addicted to the hyper-convenience of letting intelligent systems guide human choices.

From the first stages of the internet, to the interactive Web 2.0, to smartphone apps, and now virtual assistants like Alexa and Google Home: the trajectory is a clear one. At each step, we have become more reliant, we have divulged more of ourselves, and we have made more space in our lives for artificial intelligence.

Moreover, at every one of these stages we have also allowed our gateway to the internet – and our online choice – to be narrowed. First by nudges, then (added to this) through apps which encourage our affiliation to a smaller number of companies and brands, and now to particular brand “characters” – be it Alexa, Cortana, Siri, Bixby, or Google Home – all of which present the world to us through their own specific, self-interested prisms.

As Will Oremus says, writing for Slate:

“Whoever controls them [virtual assistants] may exert even more influence over users’ choices than Apple or Google did via iOS and Android. That’s because voice interfaces don’t lend themselves to choosing among various apps or scrolling through lists of options when you want to buy or watch something. So the company that makes the software gets to decide who sells you something, who plays your streaming music, or whose videos you watch by default.”

This sounds a little less nudge, and a little more shove.

Nevertheless, virtual assistants are unquestionably here to stay. Those in the know are already predicting that we will interact less and less with web browsers and smartphone apps, and more and more with voice-activated technologies and wearables.

They are not, however, the only technology likely to dominate our futures. Facebook, for one, is betting that we will also spend much of our time interacting with – and via – virtual reality.

From job training to interior design, medical interventions, travel, and socializing, virtual reality (and its plucky cousin, augmented reality) is likely to become part of the lives of near-future generations. If this is the case – and Zuckerberg is certain of it – the tech companies will not only have broken down the fourth wall, they will somehow have found a way of constructing whole parts of our lived experience: they will be all of the walls.

No more interrupting or imposing. VR allows marketers to literally design our environments to their own advantage and have it evolve organically in response to our eye movements and other mappable body expressions.

Techniques like nudge could hardly become more potent than within a virtual reality setting.

This is not mere speculation. Work has already been done by psychologists that establishes the astonishing impact that VR marketing can have upon consumers old and young. A study from researchers at Stanford University recounts how children have trouble even distinguishing virtual reality experiences from their real-life experiences. Could this mean that virtual items could increasingly become desirable things of (real) value?

What’s more, a series of experiments with adults showed that VR enables a very powerful type of “self-endorsing” – where potential consumers are presented with a portrayal of themselves using marketed products – which leads to a more favorable view of brands and a much higher rate of purchase intention.

In short, where virtual assistants can narrow our choices by controlling the default option, VR will be able to frame products and services in such a way as to make them considerably more appealing to us on a subconscious level, as well as inescapably prominent.

Of course, nudging by default or by framing is already possible on the internet, but AI developments will make them more pervasive. We will still be able to refuse nudges, make our own choices, and “walk away”, but new technology is making it less likely that we will. Especially when it comes to generations being born right now who will know no different.

There are obviously questions about autonomy and control that need to be fully understood here. To what extent is our rationality being compromised? To what extent is it okay for companies to cultivate non-rational choices? After all, sometimes they can be convenient, and therefore to our benefit (e.g. encouragement to exercise or go to the doctor, or a useful product recommendation at the right time).

Many would argue that to ethically appraise these practices we have to establish whose interests each “nudge” is in. The consumer or the company? This is never a question we’ve had to ask before of marketers because it was always so obvious: a company puts a pretty looking picture of a soda on a billboard or a TV advert, and I may be persuaded to drink it or I might choose to buy another brand. Importantly, I know what they’re trying to do, and I mainly understand the way in which they’re doing it. This new emphasis on nudges and other digital trickery changes these bygone rules of engagement, and that is why they must be reassessed.

It’s so easy to play the upper-hand when you already have it. Real efforts need to be made to re-level the playing field before it’s too late, and all of our personal decisions become the playthings of four or five Silicon Valley overlords.

FEATURED MEDIA: “Letting Facebook control AI regulation is like letting the NRA control gun laws” – Quartz


Writing for Quartz, international dispute lawyer, Jacob Turner, elaborates on the dangers of letting Silicon Valley execs set their own rules:

“We wouldn’t trust a doctor employed by a tobacco company. We wouldn’t let the automobile industry set vehicle-emissions limits. We wouldn’t want an arms maker to write the rules of warfare. But right now, we are letting tech companies shape the ethical development of AI.”

Read the whole article here: Letting Facebook control AI regulation is like letting the NRA control gun laws.

If you aren’t paying, are your kids the product?

There’s a phrase – from where I don’t know – which says: “If you aren’t paying, you’re the product.” Never has this felt truer than in the context of social media. Particularly Facebook, with its fan-pages and features, games and gizmos, plus never-ending updates and improvements. Who is paying for this, if not you… and what are they getting in return? The answer is actually quite straightforward.


Though it may have escaped many of us, Facebook is now little more than an extremely sophisticated marketing platform. Having spent years encouraging our online network-building, it now boasts an unrivalled – and unprecedented – audience of nearly 2 billion. And that’s not all; naturally, it also has access to all of these people’s personal tastes and preferences, educational and work history, their entire social circles (including details of their familial connections), their demographic data (age, sex, race, etc.), their geographic location (and the locations they have visited and plan to visit), their physical image, their hobbies, their current state of mind, their business plans and aspirations – and no doubt a number of other things I’m not clever enough to even think of.

In short, Facebook and its algorithms know more about us than we know about ourselves, and if you happen to be a business looking to sell something to the masses, you could do a lot worse than to cross its tiny blue palm with gold.

Now, 2 billion is A LOT of people. It’s more people than were alive just a hundred years ago in 1917. And given that other big markets like Russia and China are hostile to these American invaders (Russians prefer VKontakte, known as VK, and Odnoklassniki; the Chinese prefer QZone), we could speculate that Facebook is nearing saturation point. After all, how many persuadables are left? These days our parents, our grandparents, even our pets are aboard the Facebook fun bus.

Fear not, however. Just as Mr. Zuckerberg must have been tearing up at having no more worlds to conquer, the Silicon Valley brain trust has come good. The answer was three feet beneath their noses the whole time: kids, of course!

Think about it. Kids – I hate to say it – are the greediest little consumers of all. They talk about nothing but what they want, need, like, and absolutely can’t stand. Their social media utterances would be manna from heaven to machine learning algorithms looking to identify the sources of desire.

But there’s a problem. People tend to be precious about their children, and they’re usually keen to keep them away from the black hole of unknowns that is social media. The idea of a six-year-old accepting friend requests from unknown shadowy characters, or stumbling across a crude meme that puns on cats and female genitalia, is something quite repellent for most conscientious parents. That’s why, until now, Facebook has been locked to pre-teens.

Enter, Messenger Kids. A shiny new Facebook app which covers off each and every parental paranoia (which mostly revolves around their small children being drawn into conversations with middle-aged men named Barry…). Indeed, Facebook is at considerable pains to emphasize safety – which I’m not, even for a second, suggesting is a bad thing. I’m simply saying that it shouldn’t be the only concern of those who have reason to be concerned.

Let’s look at the good here. Firstly, Barry is unlikely to get anywhere near your 6-12-year-old kids on this thing. Parents must approve each of the child’s friendships. Secondly, they won’t see any of the weird and wonderful images that circulate on regular Facebook. There’s a library of approved gifs and jpegs, all of which have been deemed child-appropriate. Thirdly, Facebook won’t use information from kids’ conversations to retarget parents. So, you will not end up with a newsfeed full of cuddly toys and Frozen merchandise – Mark has been merciful in this respect at least.

Lastly, this is not pre-training for “grown-up” Facebook. Once children hit 13, they will not automatically graduate to the full horrors of the adult version. If they do, as with the rest of us, it will be their choice. Facebook is not making any assumptions (and it won’t have each child’s exact age anyway).

So, what’s the problem? Well, there may not be one, but there are certainly good reasons to be skeptical about this new spin-off.

Perhaps the most obvious concern is that we don’t really know what Facebook does to adults’ brains, never mind those of impressionable children. Lots of anxieties have been expressed lately about the ways in which social media plays upon our cognitive biases, encourages our addictions, and constantly, incessantly calls for our attention. Not least because of those “like” buttons, which will be included on Messenger Kids to introduce pre-teens to the oppressive peer validation we’ve all become slaves to.

Critically, these concerned voices aren’t limited to consumer groups and privacy campaigners – many of them are former internet practitioners who are worried about what the future holds. Ex-Google strategist, James Williams, has described the persuasive algorithmic techniques of familiar mechanisms like Google search results and the Facebook newsfeed as, “the largest, most standardized and most centralized form of attentional control in human history.”

Many might rather their offspring weren’t the guinea pigs for the kids’ version of this mind manipulation, given we adults are now so utterly obsessed that we reportedly touch our phones no fewer than 2,617 times per day…

Another issue – which admittedly may just be my issue – is that the reason for giving in and letting children use Messenger Kids is a bit lame. The BBC says that, “the prevailing mood is that since kids are using social networks, you might as well do what you can to make sure that use is safe and monitored.” This is the internet equivalent of that friend’s mum who lets all the teenagers drink at their house so they can (supposedly) monitor and contain the carnage. It’s not that this approach is necessarily wrong, it’s just that it’s weak. It’s not a stance. Is it okay for pre-teens to use social media or not? The ambiguity leaves us none the wiser.

Finally, the last problem I see takes us back to where we started: Facebook as first-class marketeers.  It comes down to this:

“Messenger Kids will of course collect data: the child’s name, the content of the messages, and typical usage reports for how the app is used. Facebook will share that information with third parties, which must have data protection policies that comply with Coppa, the Children’s Online Privacy Protection Act in the US.” (BBC)

Facebook will be harvesting, aggregating, analyzing, and selling, just as they do as part of their normal business. Only, apparently, less.

What your children say, do, want and like will be forensically picked through by commercial parties. It will be used to launch strategies, to push products, to find whatever weak underbellies are still unexposed. That sweet, strange, inventive gobbledygook by which children generally communicate will be trawled through by algorithms employed by ad-men desperate to boost their bottom-line with some precious insight pertaining to the next generation. In short – if they possibly can be – childhood ramblings will be commodified as a direct result of Messenger Kids. Otherwise, what’s the point?

According to CNN, the launch of this new app is part of a Facebook strategy to grow its user base. This is not because Mark Zuckerberg needs more friends, or because he loves to see the daisy-chains of human connections blossom. It is because there is money to be made, and your child’s data is the oil at the bottom of a drying well of new opportunity. They are, “like” it or not, the product.

6 Tech Terms Every Adult Should Learn About To Avoid Being Left Behind


Not for the first time, Apple CEO Tim Cook has spoken out this week about how important it is for children to learn computer code. He’s not alone in believing that this “language of the future” will be critical for kids growing up right now. In a sea of unknowns one thing appears to be certain: technical understanding is a very valuable asset indeed.

It’s interesting, then, that in spite of remarkable efforts to equip the adults of tomorrow with such skills, very little is being done to familiarize young adults, middle-aged parents, or retirees (with impressively long life expectancies!) with the signature terms of the “AI Age”. This seems like an oversight.

Are those of us beyond our mid-teens now lost causes? Will we soon be completely reliant on post-Millennials to guide us through life? What about those born in the 90s and early 00s – might they be outmoded before they can hit their stride? Hopefully not.

Getting to know technology needn’t involve intimidating language, nor require mathematical skill. Often just having an overview of the basics can empower, as well as inspire us to try to understand more. Here are just six as a place to start:

  1. Artificial Intelligence

You can’t go far these days without hearing about artificial intelligence or “AI” as it is often known. Confusingly, what AI refers to can vary quite a lot, and sometimes its scope is contested. Perhaps the easiest way to think of artificial intelligence is in contrast to natural (aka human) intelligence: we use the term AI to describe when a machine can mimic one or more of the functions of our brain (e.g. problem solving). Usually conversations about artificial intelligence refer to “weak” or “narrow” AI, whereby a system is focused on a single task. Sometimes, however, there is speculation about the eventual move to “strong” or “true” AI, which is concerned with machines that can perform all tasks at least as well as the human brain. At present, scientists are still far from developing anything approaching “true AI” – also known as artificial general intelligence (AGI) –  despite its popularity as a media topic. Examples of artificial intelligence: Siri, driverless cars, Spotify.

  2. Big Data

Big Data has recently been surpassed by AI as the main tech buzzword, and many argue that we should really drop the “big”. This gives a clue as to what it is: data. There are really just two important things to know when it comes to Big Data – where it comes from, and what it is used for. Big data comes from a huge variety of (often relatively new) sources, many of which are “human centric”. This means we all “emit” data as part of our so-called “digital footprint”, i.e. our interactions online, Google searches, energy usage, texts and emails, purchases, smartphones/connected devices, customer loyalty cards, travel passes, posted pictures, app usage, social media, etc. All this data is harvested by companies and governments who are interested in recognizing patterns in order to make predictions and/or well-informed judgments. Importantly, because the information is so dense, they employ artificial intelligence which uses the data as “fuel” to make quick and accurate deductions with regards to its assigned task (see above). The AI may then suggest a course of action, or even act upon these data-fueled judgments itself (consider a driverless car). Many AI systems can also use Big Data to learn, and improve the accuracy of their task execution; this is known as “machine learning”, which is a branch of artificial intelligence. Examples of Big Data: Twitter feed, Fitbit data, data from connected cars.

  3. Algorithm

Algorithm is hardly a new term, but it has recently received a lot of attention. It is usually described as a process or set of instructions to be followed when conducting calculations. It can be helpful to think of it as a kind of recipe that drives towards a successful solution, e.g. a cake. In a computational setting, the algorithm enables the artificial intelligence to complete a task methodically. But the recipe analogy is not comprehensive. In machine learning (mentioned above), vast datasets are used to train algorithmic models, which are then used to predict things based on new information. For example, big banks of email data might be used to train a model to algorithmically determine which emails are legitimate and which are spam. Once the model is trained to detect spam accurately using specific features of an email, it can be turned loose on new data (i.e. all subsequent emails). There is concern at the moment about “algorithmic bias” and “algorithmic accountability” because the datasets used to train algorithmic models often contain biases, or can be unrepresentative when the information relates to members of society. Examples of algorithms: any step-by-step process that helps accomplish a task.
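The spam example above can be sketched in a few lines of code. This is a deliberately naive, toy illustration of the “train on labeled data, then predict on new data” pattern – real spam filters use far richer features and models, and the emails and words below are invented for the example:

```python
# Toy "train, then predict" spam detector: count word frequencies per class
# during training, then label new emails by which class's words they match.

from collections import Counter

def train(labeled_emails):
    """Count how often each word appears in spam vs. legitimate ("ham") emails."""
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in labeled_emails:
        counts[label].update(text.lower().split())
    return counts

def classify(model, text):
    """Label a new email by which class its words appeared in more often."""
    words = text.lower().split()
    spam_score = sum(model["spam"][w] for w in words)
    ham_score = sum(model["ham"][w] for w in words)
    return "spam" if spam_score > ham_score else "ham"

# Invented training data: each email comes with a human-assigned label.
training_data = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch on tuesday?", "ham"),
]

model = train(training_data)
print(classify(model, "free prize money"))   # prints "spam"
```

The model never sees rules written by a human; it simply generalizes from the labeled examples – which is also why a biased or unrepresentative training set produces biased predictions.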

  4. Internet of Things

Also known as “IoT”, the Internet of Things is the term used to describe the vast network of connected devices that exist in our physical world. These items usually give off and utilize data for their functions. For example, a Fitbit-style device might tell data gatherers what my heart rate is whilst I’m running. They can then aggregate this with data from other users to determine an average or ideal heart rate, which, in turn, the device can judge my heart rate against (like a feedback loop!). Experts say that there will be as many as 30 billion IoT devices by 2020, including smartphones, connected cars, washers/dryers, heart implants, robotic vacuums, wearables, animal chips, smart bridges/infrastructure, etc. All these things can be sensed or controlled remotely. Examples of IoT devices: Amazon Echo, Fitbit, driverless cars.
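The heart-rate feedback loop described above boils down to simple aggregation and comparison. Here is a minimal, hypothetical sketch – all readings are invented for illustration:

```python
# Aggregate heart-rate readings from many wearable users, then compare one
# user's reading against the group average (the "feedback loop").

def average(readings):
    """Mean of a list of readings."""
    return sum(readings) / len(readings)

def feedback(my_rate, group_rates):
    """Tell the user where their reading sits relative to the group."""
    avg = average(group_rates)
    if my_rate > avg:
        return "above average"
    if my_rate < avg:
        return "below average"
    return "about average"

# Beats-per-minute readings reported by other runners' devices
group = [150, 155, 160, 165, 170]   # group average: 160 bpm

print(feedback(148, group))          # prints "below average"
```

Scale the `group` list up to millions of devices and you have the basic shape of what IoT data gatherers do with the readings our gadgets give off.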

  5. Bots

Many of us will already know – roughly – what bots are. They are a type of automated script or AI that can mimic human behaviors on the internet. There are literally millions of them. It’s easy to talk past one another when discussing bots, however, as they can mean different things to different people. Here’s a brief (and non-exhaustive) overview of the bots we may come across:

  • Chatbots: designed to conduct conversations with humans. Often used by brands to interact with customers.
  • Transactional bots: can act on behalf of humans (e.g. collecting data/checking records).
  • Informational bots: push out useful information, like breaking news stories on social media.
  • Entertainment bots: can play games or invent amusing stories.
  • Hacker bots: attack websites and sometimes networks. Distribute malware.
  • Scraper bots: steal and publish content from other sites.
  • Spam bots: place low-quality promotional material on the internet and drive traffic to spammers’ websites.
  • Impersonators or social bots: mimic natural human characteristics on social media. Can push out propaganda and be used to sway public opinion.

Examples of bots: Tay, @newsycombinator, Peñabots

  6. Augmented Reality

Last but not least, you’ve no doubt heard of virtual reality (VR), but do you know of its cooler cousin, augmented reality (AR)? Think Pokémon GO. The uses of this kind of graphical overlay, however, are multiple, and it’s gaining in popularity. Unlike virtual reality, augmented reality doesn’t require bulky, restrictive (and expensive) headgear; many apps are already available on your smartphone. For example, the Blippar app (which uses AI) lets you discover more about your environment simply by positioning your camera. It does what the name suggests: it augments the real world with images, information, sounds, video, or GPS information. It does not replace the real world, unlike virtual reality. Although it is still in its infancy, augmented reality could be used in entertainment, education, construction, the visual arts, retail and medicine, to name but a few fields. Examples of augmented reality: IKEA Home Planner, Pokémon GO, Blippar.

This is not an exhaustive list, and nor are its contents comprehensive (or flawless, for that matter). It’s intended as a shareable guide for adults – many of whom feel as though they are falling behind in a world where data scientists and marketers are beginning to run rings around them in an increasingly unedifying way.

Bots may be determining all our futures

social bots

We’ve all seen the stories and allegations of Russian bots manipulating the Trump-Clinton US election and, most recently, the FCC debate on net neutrality. Yet far from such high stakes arenas, there’s good reason to believe these automated pests are also contaminating data used by firms and governments to understand who we (the humans) are, as well as what we like and need with regard to a broad range of things…

Let me explain.

Social bots (which is what we’re talking about here; “bot” is a catch-all term for many different types of AI) can be a nuisance for social media platforms. A recent report has estimated as many as 48 million Twitter accounts are actually bots, and between them they are responsible for as many as 1 in 4 tweets. Depressingly for Taylor Swift fans, a study in 2015 revealed that 67% of her followers were – you guessed it – bots, and a new study from the University of Cambridge revealed that celebrities with more than 10 million followers behave in bot-like ways themselves. Like it or not, everywhere you turn on social media, you are likely to be confronted by automated accounts. Many of them are highly sophisticated when it comes to impersonating human interactions using natural language, and they can even replicate real-life human networks.

So why does this matter? The answer to this is really twofold. The first is well-reported in the context of politics. These bots are deceptive, and specifically designed to “present” as real people. This means they have regular names, hobbies, ages and affiliations. They are relatable, and as such they can influence real users. They are rented, not just by governments but also by big businesses looking to create hype, and they’re deployed in the knowledge that we humans are susceptible to bandwagons. Consequently, they can create or mask real public sentiment, and this means that whoever programs and operates them can wield a lot of power.

The second problem is rather more subtle: bots can badly distort the social data that is used to make predictions and assumptions about human behavior. In other words, they make social media less reflective of “real life”, and real people. This is significant for companies participating in social listening, data mining or sentiment analysis. Researchers at Networked Insight found that nearly 10% of the social media posts brands analyze to understand their consumers’ behavior do not come from real users. It is significant for us because, where this analysis fuels “nudge” techniques and causes brands to shepherd us toward particular options (which happens even when we aren’t conscious of it), this is being carried out based on “insight” muddied by artificial voices.
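To see how much difference automated accounts can make, consider a toy sentiment-analysis sketch. The scores (from -1 for negative to +1 for positive) and the bot flags below are invented; real pipelines have to infer both from post content and account metadata:

```python
# How bot posts can flip the apparent sentiment of a sample of social posts.
# Scores and bot flags are fabricated for illustration only.

posts = [
    {"score": -0.4, "bot": False},   # real users: mildly to clearly negative
    {"score": -0.2, "bot": False},
    {"score": -0.5, "bot": False},
    {"score":  0.9, "bot": True},    # bots pushing artificial hype
    {"score":  0.8, "bot": True},
]

def mean_sentiment(sample):
    """Average sentiment score across a sample of posts."""
    scores = [p["score"] for p in sample]
    return sum(scores) / len(scores)

raw = mean_sentiment(posts)                              # bots included
clean = mean_sentiment([p for p in posts if not p["bot"]])

print(round(raw, 2), round(clean, 2))
```

With the bots included, the “public” looks mildly positive; strip them out and real users are actually negative – exactly the kind of distortion that can mislead brands and governments doing social listening.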

Internet “trends” are often scaled up and relayed as fact by those who seek to analyze (and capitalize on) our every online movement. Where sentiment has been warped by bots, this could lead brands and governments to mistakenly steer us away (en masse) from what we actually want or need, stifling the will of the public. And there’s an additional harm here: if an individual and/or societal group detects that the way they are being categorized is contrary to their preferences, there’s a good chance they’ll make efforts to modify their behavior in ways that could be unhealthy for them (or at least, not preferable).

The social media giants are not standing still on this; they are hard at work bot-busting, and at the same time data users are trying to “clean” their bounty as best they can. Nevertheless, bot-makers are good at adapting and evolving the qualities that make their AI undetectable. In response, Germany has plans to introduce a compulsory labelling system for posts from automated accounts, yet given that many bot users are “rogue” anyway, it’s likely that such rules will be flouted. Consequently, citizens, small businesses, and members of civil society must be aware of bots’ ability to both steer and infect the “truths” of the masses, which cannot be taken at face value – and they also need to know just how to proceed with appropriate caution…

All teens make mistakes, but hyperconnected Generation Z faces steeper consequences


Last week a young contestant on a British reality TV show was left humiliated after producers chose to remove him from the program’s Australian jungle setting after just a couple of days. Their reason? Tweets and social media messages sent in 2011, when the vlogger was in his teens.

Now let’s be clear, the things that Jack Maynard said were unpalatable and offensive. They are not acceptable sentiments in any scenario, and certainly not from someone with a YouTube reach of several million and incredible leverage over (predominantly) teenage girls. Nevertheless, watching a young man’s fledgling media career left in tatters should prompt us to sharpen our focus on an increasingly important question: in our hyperconnected era, to what extent can we punish and pillory the adult for the sins of the teen?

It is worth remembering that, traditionally, we give naughty children and tearaway teens some extra slack, and in many progressive jurisdictions under-18s are treated more leniently by the law. In the UK, custodial sentences are a last resort for those aged 10 to 17, more heed is paid to the fact that deviant acts may be a “phase”, and courts are advised to avoid “criminalizing” young people. Critically, decisions about the poor behavior of young people acknowledge that they can be naïve, and judgments strive to contain damaging ramifications that could prevent the individual blossoming into a perfectly decent and productive member of society.

This makes sense to most of us. After all, which one of us managed to progress to adulthood without saying or doing something we’re deeply embarrassed or ashamed of? How many of those things can we even remember, given our tendency to don rose-colored spectacles and reimagine an untainted edit of our “playful” youth? Those of us who did most of our maturing in the pre-internet era at least have that privilege. The nasty or strange things we said and did were verbal, or written on pieces of paper that have long since disappeared. Those born into Generation Z (or just before) – like Jack Maynard – are doomed to be held responsible as adults for the same indiscretions.

I am proposing that if we are willing to forgive young people for the occasional faux pas when they are learning their place in the world (and I’m assuming we are, provided it is not something grave or unsettling), then we should ensure that this forgiveness carries forward into adulthood. If a 15 or 16-year-old boy asks a member of his peer group for nude photographs, we should remember that these are the actions of the boy, not the 23-year-old man who stands before us when the behavior comes to light.

If this seems trivial, then we should consider that young people now are constantly connected. Most of their hopes, dreams, likes, dislikes, and many pictures of their bodies are already laid out on the internet where they will remain forever. All of their off-hand comments, forays into swearing, their use of politically incorrect language; in short, all of their juvenile idiocy is already documented. For many it could come back and bite them very hard on the behind.

So, what’s the solution? Well, I can think of two, though it’s becoming clear that the first is rather futile; that is, to remind teens that for all its brilliance, social media and the internet is very exposing. It takes notes, and the way we all behave now will echo forever in cyberspace. For some, this may be effective. A fear of the internet could well function like a fear of god, and Generation Z might blossom into paragons of online virtue. It is probably worth a try, and schools and parents are already at it.

My second thought concerns all of us. Together we are society, and we get to decide how much emphasis we lay on misdemeanors from long ago. Collectively, we can choose to ignore the ugly parts of a young person’s social snail trail – and that would be my personal preference. Just as our non-criminal sins evaporated behind us, so too should we ensure that current and future generations have the same “wriggle room” to muck up and get away with it. Otherwise the chilling effect may cripple their development in ways we are yet to understand.

Everyone should be allowed to make non-grievous mistakes. Sometimes those things will be uncomfortable for us to accept, but we don’t have to accept the acts themselves. All we have to accept is that young people – especially teenagers – don’t get everything right the first time, and from there we can, in good conscience, intentionally disregard social media errors made by those who did not have the rights of adulthood at the time.

Governments should consult with AI when taking decisions: Five good reasons


Artificial Intelligence is becoming ever more sophisticated in its deductions. This has caused many to consider its role in the governance of countries, states, cities, and towns. I believe there’s a strong case to make when it comes to its integration into politics and power. Here’s why:

  1. Human decision-making can be badly flawed

We are not as rational as we take ourselves to be. Our judgments are regularly compromised by cognitive biases which can unfairly influence the way in which we deliberate. We think that good-looking people give more compelling testimony, we are liable to fall into “group think” in meetings, and our political biases skew how we calculate the benefits and risks of new initiatives (Daniel Kahneman, Thinking, Fast and Slow). Artificially intelligent systems are uncompromised by these biases, and by the plethora of subjective, conscious experiences which mislead reliable, reasoned decision-taking.

  2. Algorithms are more accurate in unpredictable environments

Back in 1954, the psychologist Paul Meehl published a work in which he revealed that statistical algorithms consistently outperform the predictions of trained clinicians. Decades later, and after more than 200 re-runs of this type of study, no one has convincingly contradicted this conclusion. It remains the case that in environments with high levels of unpredictability – like politics and governance – formulas maximize accuracy above and beyond the intuitions of industry experts. This inconvenient truth has been widely ignored by those whose authority it challenges.

  3. Machines are fast and comprehensive

Artificial intelligence is extraordinarily fast, and this speed allows it to churn through huge amounts of data. Thanks to “datafication” there is now more human-centric information than ever (social media, email/text data, energy use, travel info, geolocation/GPS, loyalty cards, Fitbit/device data, etc.). Undoubtedly, much of this is highly useful to those who govern; it gives critical, up-to-date (even real-time) information about who we are, how we behave, what we like and what we need. Yet, without AI we can’t interrogate it. Human decision-takers have to ignore much of it, as our comparatively slow processing tools cannot hope to keep pace without assistance. Non-adopters could easily be considered negligent governors in the future.

  4. Apocalyptic predictions shouldn’t obstruct reasonable use

Some critics are fearful of integrating artificial intelligence into public life due to longer-term fears about true or strong AI, i.e. machines with human-level intelligence that could become hostile or take their place as our despotic rulers. Skynet, basically. If it is even possible, this type of AI is a long way off, and dystopian predictions should not be allowed to stand in the way of reasonable, deployable advancements in narrow AI: machines optimized for a very limited range of tasks. Though the latter can already outstrip human prediction in a number of areas, there’s no obvious way in which it might evolve into complex, generalized intelligence. Many experts are skeptical that it ever will.

  5. AI can reduce costs to citizens

Lastly, it should probably strike us as obvious that if judgments can be made more quickly and accurately, this will reduce costs. In terms of government departments, this might refer to admin and research costs. In politics, it could mean better prediction and foresight when making important decisions that have social costs attached.  There are two other likely bonuses; firstly, that more gets done when the mechanisms move more quickly, and second, human decision-makers can assign more time for deliberation where needed. As with most innovation, intelligent machines remove much of the labor so that humans can focus more sharply on purpose and value.

It is true that many governments and government departments already work with AI instruments, but their use is not ubiquitous or obligatory. Though there are well-founded concerns about the use of algorithms, it makes sense to cautiously experiment with their use in environments that are currently dominated by (largely) flawed human intuition.