Link to the complete video of the Ethical Algorithms panel.
Chelsea Manning: Me personally, I think that we in technology have a responsibility to make our own decisions in the workplace – wherever that might be. And to communicate with each other, share notes, talk to each other, and really think – take a moment – and think about what you are doing. What are you doing? Are you helping? Are you harming things? Is it worth it? Is this really what you want to be doing? Are deadlines being prioritized over – good results? Should we do something? I certainly made a decision in my own life to do something. It’s going to be different for every person. But you really need to make your own decision as to what to do, and you don’t have to act individually.
Caroline Sinders: Even if you feel like a cog in the machine, as a technologist, you aren’t. There are a lot of people like you trying to protest the systems you’re in. Especially in the past year, we’ve heard rumors of widespread groups and meetings of people inside of Facebook, inside of Google, really talking about the ramifications of the U.S. Presidential election, of questioning, “how did this happen inside these platforms?” – of wanting there even to be accountability inside of their own companies. I think it’s really important for us to think about that for a second. That that’s happening right now. That people are starting to organize. That they are starting to ask questions.
Kristian Lum: There are a lot of models now predicting whether an individual will be re-arrested in the future. Here’s a question: What counts as a “re-arrest?” Say someone fails to appear for court and a bench warrant is issued, and then they are arrested. Should that count? So I don’t see a whole lot of conversation about this data munging.
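(To make Kristian’s point concrete, here is a minimal, hypothetical sketch – the field names and records below are invented for illustration – showing how the “re-arrest” label a model is trained on changes depending on whether a bench-warrant arrest for failing to appear is counted.)

```python
# Hypothetical arrest records; field names and values are invented for illustration.
arrests = [
    {"person_id": 1, "charge": "burglary",          "bench_warrant_fta": False},
    {"person_id": 2, "charge": "failure to appear", "bench_warrant_fta": True},
    {"person_id": 3, "charge": None,                "bench_warrant_fta": False},  # no new arrest
]

def rearrested(record, count_bench_warrants: bool) -> bool:
    """Return True if this record counts as a 're-arrest' under the chosen definition."""
    if record["charge"] is None:
        return False
    if record["bench_warrant_fta"] and not count_bench_warrants:
        return False
    return True

# The same underlying data yields a different outcome label -- and therefore a
# different training target -- depending on a munging decision made upstream.
strict  = [rearrested(r, count_bench_warrants=True)  for r in arrests]  # [True, True, False]
lenient = [rearrested(r, count_bench_warrants=False) for r in arrests]  # [True, False, False]
print("counting bench warrants:    ", strict)
print("not counting bench warrants:", lenient)
```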
Read the whole thing here. Watch the whole video here.
Technology Track – Ethical Algorithms
2:00 – 2:45 pm – Ethical Algorithms Panel – w/ Q and A.
Kristian Lum (Human Rights Data Analysis Group – HRDAG) – As the Lead Statistician at HRDAG, Kristian’s research focus has been on furthering HRDAG’s statistical methodology (population estimation, or multiple systems estimation, with a particular emphasis on Bayesian methods and model averaging).
Caroline Sinders (Wikimedia Foundation) – Caroline uses machine learning to address online harassment at Wikimedia, and before that, she helped design and market IBM’s Watson. Caroline was also just named one of Forbes’ “8 AI Designers You Need to Know.”
Plus special guests TBA.
About the Ethical Algorithms Panel and Technology Track by Lisa Rein, Co-founder, Aaron Swartz Day
I created this track based on my phone conversations with Chelsea Manning on this topic.
Chelsea was an Intelligence Analyst for the Army and used algorithms in the day-to-day duties of her job. She and I have been discussing algorithms, and their ethical implications, since the very first day we spoke on the phone, back in October 2015.
As Chelsea has written: “The consequences of our being subjected to constant algorithmic scrutiny are often unclear… algorithms are already analyzing social media habits, determining credit worthiness, deciding which job candidates get called in for an interview and judging whether criminal defendants should be released on bail. Other machine-learning systems use automated facial analysis to detect and track emotions, or claim the ability to predict whether someone will become a criminal based only on their facial features. These systems leave no room for humanity, yet they define our daily lives.”
A few weeks later, in December, I went to the Human Rights Data Analysis Group (HRDAG) holiday party and met HRDAG’s Executive Director, Megan Price. She explained a great deal to me about the predictive software used by the Chicago police, and how it was predicting crime in the wrong neighborhoods because of the biased data it was getting from meatspace. Meaning, the data itself was “good” in that it was accurate, but it reflected the Chicago PD’s own less-than-desirable patrol and arrest patterns, and those patterns were being used as the guide for sending officers out into the field. Basically, the existing bad behavior of the Chicago PD was being fed back in to dictate its future behavior.
This came as a revelation to me. Here we have a chance to stop the cycle of bad behavior by using technology to predict where the next real crime may occur; instead, we have chosen to memorialize the faulty techniques of the past in software, to be used forever.
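One way to see why this worries me is a toy simulation, with entirely hypothetical numbers, loosely in the spirit of the feedback-loop problem described above: if patrols are sent to wherever past arrests were recorded, and patrols generate more recorded arrests, the “predictions” drift toward the historically over-policed neighborhood regardless of where crime actually occurs.

```python
import random

random.seed(0)

# Two neighborhoods with the SAME true crime rate, but neighborhood A starts
# with more recorded arrests because it was historically patrolled more heavily.
# All numbers here are invented for illustration.
true_crime_rate = {"A": 0.5, "B": 0.5}
recorded_arrests = {"A": 100, "B": 50}   # biased historical data
patrols_per_day = 10

for day in range(30):
    total = sum(recorded_arrests.values())
    # "Predictive" allocation: patrol in proportion to past recorded arrests.
    patrols = {"A": round(patrols_per_day * recorded_arrests["A"] / total)}
    patrols["B"] = patrols_per_day - patrols["A"]
    # Arrests are only recorded where officers are sent, so the data echoes the patrols.
    for hood, n in patrols.items():
        for _ in range(n):
            if random.random() < true_crime_rate[hood]:
                recorded_arrests[hood] += 1

# Arrests keep piling up in A even though A and B are equally "criminal."
print(recorded_arrests)
```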
I have gradually come to understand that, although these algorithms are being used in all aspects of our lives, it is often unclear how or why they work. It has also become clear that they can develop their own biases, based on the data they were given to “learn” from – and the origin of that “learning data” is often not shared with the public.
I’m not saying that we have to understand exactly how every useful algorithm works – which I understand would be next to impossible – but I’m not sure a completely “black box” approach is best, at least when the public, public data, and public safety are involved. (Thomas Hargrove’s Murder Accountability Project and its “open” database are one example of a transparent approach that seems to be doing good things.)
There also appears to be a disconnect within law enforcement. Some departments seem content to rely on technology for direction, for better or worse – such as the predictive software used by the Chicago Police Department – while in other situations, such as Thomas Hargrove’s Murder Accountability Project (featured in the article “Murder He Calculated”), technologists are having a hard time getting law enforcement to take these tools seriously. Even when these tools appear to have the potential to find killers, there seem to be numerous invisible hurdles in the way of any kind of timely implementation. Even in these “life and death” cases, Hargrove has had a very hard time getting anyone to listen to him.
So, how do we convince law enforcement to do more with some data while we are, at the same time, concerned about the oversharing of other forms of public data?
I find myself wondering what can even be done, if simple requests such as “make the NCIC database’s data for unsolved killings searchable” seem to be falling on deaf ears.
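For what it’s worth, the technical side of that request is the easy part. Here is a minimal, hypothetical sketch – the file name and column names are invented for illustration and do not reflect how NCIC or Murder Accountability Project data is actually structured – of what “searchable” could mean: load the case records and filter the unsolved ones by place and year.

```python
import csv

def unsolved_cases(path, state=None, year=None):
    """Yield unsolved homicide records, optionally filtered by state and year.
    Column names here are hypothetical."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["solved"].strip().lower() == "yes":
                continue
            if state and row["state"] != state:
                continue
            if year and row["year"] != str(year):
                continue
            yield row

# Example usage: list unsolved cases in Illinois from 2016.
# for case in unsolved_cases("homicides.csv", state="IL", year=2016):
#     print(case["agency"], case["victim_age"], case["weapon"])
```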
As a result of this panel, I am hoping to come away with some actual action items that can be followed up on in the months to come.
Saturday, November 4, 2017
2:00 – 2:45 pm – Ethical Algorithms Panel – w/ Q and A, with Kristian Lum (Human Rights Data Analysis Group – HRDAG) and Caroline Sinders (Wikimedia Foundation), plus special guests TBA.