
Moralgorithms: Safety According to Google

Colleen Reilly, Nate Tkacz, & Michael Dieter

Introduction

Our challenge was to explore how Google presents search results differently through each of its SafeSearch filters. Because these filters decide what is constituted as safe (the strict filter), moderately safe (the moderate/default filter) and unsafe (no filter), they must be understood as a kind of "algorithmic morality". Specifically, we chose to run the top ten terms from Google Zeitgeist 2011, which lists the "fastest rising searches" over the year according to Google's own algorithm. Unlike previous "filter studies", our aim was not to expose errors or censorship, but to map how each filter constitutes the results for a given term.

More specifically, we used Google's global list of fastest rising searches for 2011 (Google Zeitgeist) to map variances in the constitution of popular searches across the three filter levels. While there are undoubtedly more interesting (and controversial) search terms that would show greater variance, we chose the Zeitgeist list because variances between filtered results for these terms are thought to be highly performative. That is, these terms structured the search experience more than most others during 2011.

Our interest was not to reveal anomalies, errors, or deliberate censorship, as other studies have done, nor even primarily to understand how the filters work. Rather, our interest was in mapping the variances between the filtered results for each search term. Such a mapping shows the moral boundaries and moral variances of single search terms, as well as the variances between different terms.

Methodology

We began by finding the list of fastest-rising global queries on Google Zeitgeist (see http://www.googlezeitgeist.com/en/) and picked the top ten:
  1. Rebecca Black
  2. Google+
  3. Ryan Dunn
  4. Casey Anthony
  5. Battlefield 3
  6. iPhone 5
  7. Adele
  8. 東京 電力 (Tokyo Electric Power)
  9. Steve Jobs
  10. iPad2
Set up a research browser

Locate advanced search

Set up spreadsheets in Google Docs to collect data; one sheet per filter level: strict, moderate, and no filter (unsafe)

Search for each term; for the unfiltered (no filter) searches, two researchers ran each term to check browser integrity.

URL Retrieval Method (a scripted sketch follows these steps):
  1. In the browser preferences, clear the history and use custom history settings.
  2. Run a Google search for "test" to access the search settings.
  3. In Google Preferences, disable search history customisation.
  4. Check that the filter level is set as required (strict, moderate, or no filter) and that the number of results per page is 50.
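
The retrieval above was done manually in a research browser, with Harvester used to extract the URLs. For readers who want to approximate the collection step in script form, below is a minimal Python sketch; it is not the project's own workflow, and the safe parameter values, the num parameter, and the HTML selector are assumptions about Google's result pages.

import requests
from bs4 import BeautifulSoup

# Assumed mapping from filter level to Google's `safe` query parameter.
SAFE_LEVELS = {"strict": "active", "moderate": "moderate", "none": "off"}

def fetch_result_urls(query, safe_value, num_results=50):
    """Fetch one results page for `query` and return the organic result URLs."""
    response = requests.get(
        "https://www.google.com/search",
        params={"q": query, "safe": safe_value, "num": num_results},
        headers={"User-Agent": "Mozilla/5.0 (research browser)"},
        timeout=30,
    )
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # The selector for result links is an assumption; inspect the live markup to confirm.
    return [a["href"] for a in soup.select("h3 a[href]")]

if __name__ == "__main__":
    results = {
        level: fetch_result_urls("Rebecca Black", value)
        for level, value in SAFE_LEVELS.items()
    }
    for level, urls in results.items():
        print(level, len(urls))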

After the data was collected, we used Triangulate to compare the three lists of URLs for each search term.
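
Triangulate is the DMI tool that performed this comparison. As a rough illustration of the same logic in Python (our own sketch; the category labels are ours, not Triangulate's output format), the three harvested URL lists can be partitioned with plain set operations:

def triangulate(strict, moderate, unfiltered):
    """Partition three URL lists into unique, pairwise-shared, and shared-by-all sets."""
    s, m, u = set(strict), set(moderate), set(unfiltered)
    return {
        "shared by all three": s & m & u,
        "strict and moderate only": (s & m) - u,
        "strict and no filter only": (s & u) - m,
        "moderate and no filter only": (m & u) - s,
        "unique to strict": s - m - u,
        "unique to moderate": m - s - u,
        "unique to no filter": u - s - m,
    }

# Toy lists standing in for one term's three harvested URL lists.
partition = triangulate(
    ["a.com", "b.com", "c.com"],
    ["b.com", "c.com", "d.com"],
    ["c.com", "d.com", "e.com"],
)
for category, urls in partition.items():
    print(category, sorted(urls))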

To visualise the results, we created diagrams based on the "detailed" triangulation output, distinguishing URLs unique to one filter, URLs overlapping between two filters, and URLs shared by all three.
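
The diagrams were built by hand from the detailed triangulation output. One way to approximate them programmatically is a three-set Venn diagram per search term, here assuming the third-party matplotlib-venn package (our choice, not a tool used in the project):

import matplotlib.pyplot as plt
from matplotlib_venn import venn3  # pip install matplotlib-venn

# Toy URL sets standing in for one term's strict, moderate, and unfiltered results.
strict = {"a.com", "b.com", "c.com"}
moderate = {"b.com", "c.com", "d.com"}
unfiltered = {"c.com", "d.com", "e.com"}

venn3([strict, moderate, unfiltered], set_labels=("strict", "moderate", "no filter"))
plt.title("SafeSearch overlap for one search term")
plt.show()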

Tools

Harvester (to collect URLs from the searches)

Triangulate (to compare the resulting URL lists)

Preliminary Findings

We found that terms varied considerably, both in their own filtered constitution and in comparison with other terms. It also appeared that many results had perplexing moralities, not because of the content being filtered out, but because of the unusual commonalities between the strict and unfiltered results.

References on Google filters

Edelman, Benjamin. (2003). Empirical analysis of Google SafeSearch. Retrieved from http://cyber.law.harvard.edu/archived_content/people/edelman/google-safesearch/