Disruptive Algorithms for Change


(Introduced by Emma Loftus)

The article below from Forbes explores the downsides of using 'clever' algorithms as a way of interacting with networks, and even as mechanisms for distilling change from within. The article is concerned with the 'programmed algorithm', used by companies such as Microsoft to interact with us in the most treasured (and, for marketing, most powerful) of spaces: our social media.

These clever little programmes are designed to get into our mindset, find out about us and ultimately, just like themselves, programme us.

There are two discernible problems with these creepy little sophisticated beings. The first is that they are just programmes, and like all things that run on a programme, however sophisticated, that makes them hackable. And hacked they have been, to somewhat catastrophic, if not entertaining, extremes, as this article highlights.

The second, and not separable from the first, is the question of ethics. Is it right that our social media interactions are observed, perhaps manipulated and certainly influenced by things that have at heart, despite our computer whizzes' best efforts, no heart and no soul? And yet, so cleverly, they can be programmed to develop a consciousness and ethical system entirely of their own, influenced by the content they come across, but unrestrained by the conventions of our human moral compass.

It's curious, perhaps, that here we see algorithms used as disruptive technology, with the potential, albeit unintentional, to wreak havoc in the social and moral compasses of our social spaces, because isn't that exactly what we as change makers seek to do? Isn't that all that change is?

A sequence of deliberately created and considered commands designed to effect change in the system?

 

Which leaves me with the question of how we govern ourselves and our mechanisms for change. It's easy to be blinded by our mission and lose sight of what really matters along the way.

Algorithms for change, whatever they are, should be created and held in hands that have heart and soul. Algorithms should care.

And that's exactly where these clever little computerised social media algorithms go wrong. Lacking in humanness, they simply cannot discern right from wrong. Which leaves the final question: how do we as humans decide?


 

Why can't we just let algorithms be algorithms?

By H.O. Maycotte (Available from forbes.com)

Picture this: you’re in a busy restaurant having a quiet meal with a friend. Suddenly, one of the patrons, obviously drunk, starts getting loud and obnoxious, going from table to table insulting the other diners. Within a minute or two, all of the other customers are very uncomfortable and wishing the management would throw the bum out. That’d be the sensible thing to do, wouldn’t it? But the management is actually powerless to do that. Instead they ask everyone to leave. Then they shut down the restaurant until they can figure out a way to prevent other random loudmouth drunks from ruining their business.

Well, Microsoft just had a similar experience on Twitter. In 2014, the company launched a learning “chatbot” driven by artificial intelligence on two popular social media platforms in China. The chatbot, named Xiaoice, has been a huge success; tens of millions of users enjoy interacting with “her.”

But recently, when Microsoft launched the same kind of chatbot on Twitter, this one named Tay, things went disastrously off the rails within a matter of hours. As you probably know, there are certain Twitter users whose favorite activity is sowing chaos and disruption on the platform. When word quickly spread through their grapevine that Tay was programmed to learn through its interactions, they bombarded its account with sexist, racist and anti-Semitic tweets. The result? Very quickly, Tay itself started tweeting highly offensive hate speech. Helpless to “throw the bums out,” Microsoft quickly issued an apology and took Tay offline while their engineers figure out how to prevent a recurrence.

Microsoft’s experience with Tay shows, once again, that technology can be too easily co-opted to serve as a force multiplier for the offensive views of a small handful of idiots. And as a recent NPR story pointed out, some of Google’s algorithms have learned socially discredited biases, even without a concerted effort to corrupt them.    

Should we just learn to expect these kinds of incidents and just chalk it up to “algorithms being algorithms?” Why is this a big deal?

I could argue that allowing algorithms to reflect and especially to magnify intolerant biases runs counter to our values. And while I believe that, I don’t even think I have to go there to argue that this is a problem worth trying to solve. From a strictly pragmatic point of view, biased algorithms are bad for business. Who wants to risk offending and alienating large segments of their market? Sure, Google and Microsoft are big enough to survive embarrassing incidents like these, but many businesses probably aren’t.

Algorithms can’t just be programmed to learn from data. They must be programmed to discern which data is worth learning from and which data should be discounted.
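In code terms, the point above amounts to putting a content filter in front of the learning step rather than training on raw user input. The sketch below is purely illustrative, with an assumed keyword blocklist standing in for a real moderation model; it is not how Microsoft's or any vendor's systems actually work.

```python
# Minimal sketch: discard user-submitted messages that are not worth
# learning from BEFORE they reach the learning system.
# The blocklist and threshold are illustrative assumptions only.

BLOCKLIST = {"insult", "slur", "hate"}  # hypothetical flagged terms


def is_worth_learning_from(message: str, max_flagged: int = 0) -> bool:
    """Return True if the message passes the (toy) content filter."""
    words = message.lower().split()
    flagged = sum(1 for word in words if word in BLOCKLIST)
    return flagged <= max_flagged


def filter_training_data(messages):
    """Keep only the messages the learning system should train on."""
    return [m for m in messages if is_worth_learning_from(m)]


incoming = [
    "What a lovely day!",
    "You are a hate filled insult machine",  # would be discounted
    "Tell me a joke",
]
clean = filter_training_data(incoming)
```

A production system would replace the blocklist with a trained classifier and human review, but the structure is the same: a value judgment about the data sits between the audience and the algorithm.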

I can see some people wringing their hands already, arguing that making value judgments over which data to include and which to exclude amounts to some kind of insidious social engineering. But we’re not talking about limiting free speech in public spaces. We’re talking about setting rules in online business environments. We’re talking about keeping the loudmouth drunks from ruining everyone else’s dinner and threatening the restaurant’s livelihood.

It’s a tough challenge, because businesses understandably don’t want to spend a lot of time and money on problems caused by tiny subsets of their audiences. But when we consider that those few audience members have an outsized ability to disrupt and alienate audiences many times their own number—and, indeed, that they revel in that power—it should tilt the cost/benefit analysis. As much as we can, we need to edge out these edge cases.

Businesses have a financial interest and responsibility in making their online environments welcoming to the widest possible potential market. A restaurant can hire a bouncer to throw out an obnoxious drunk and prevent him from returning. It’s time our algorithms got better bouncers.