Weed and Geotags: The Danger and Uselessness of Social Media Monitoring

In 2016, the Brennan Center for Justice identified 151 local and state law enforcement agencies in the United States that had subscribed to social media monitoring services such as Geofeedia, Media Sonar, Snaptrends, Dataminr, DigitalStakeout, and Babel Street. Following investigations by reporters and the ACLU, many of these services are now defunct or subject to API restrictions, a devastating blow to these surveillance projects. However, little is known about how these surveillance services identify social media posts of interest and who is impacted. Our research analyzes one such platform: DigitalStakeout.

It is informative to review what has previously been uncovered about social media monitoring. Media Sonar, used by the Fresno Police Department, encouraged police to track #BlackLivesMatter and related hashtags to identify “threats to public safety.” After it was revealed that Media Sonar marketed itself as a way for police to “avoid the warrant process,” Twitter cut off the company’s access to its enterprise API. Twitter also cut SnapTrends’ API access after details of law enforcement use of their software were released; SnapTrends closed shop shortly thereafter. Geofeedia was notably used during the Freddie Gray uprisings to “arrest [protesters] directly from the crowd,” aided by social media posts and face recognition technology; shortly after this revelation from the ACLU of Northern California, Facebook, Twitter and Instagram all revoked Geofeedia’s API access.

Notably, DigitalStakeout is still in the business of monitoring social media for police. Here in Oregon, during a trial period of the software, an agent of the Oregon Department of Justice used DigitalStakeout to search for #BlackLivesMatter, discovered that an Oregon DOJ attorney was tweeting support, and wrote a memo describing the posts as “possible threats towards law enforcement”; the agent who wrote the memo was later found to be in violation of state law. While the Oregon DOJ now has a policy of not subscribing to such software, the local police department in Corvallis, Oregon, home to my employer, Oregon State University, subscribed to DigitalStakeout starting in July 2016.

In partnership with the Civil Liberties Defense Center, I requested data from the Corvallis Police Department about their use of DigitalStakeout. The request (eventually) returned logs of automated searches configured by DigitalStakeout—in total, 7240 links to public social media posts on Twitter, Instagram, Flickr, Facebook and YouTube. Sociology professor Brett Burkhardt, computer science doctoral student Alexandria LeClerc and I analyzed the Twitter data and reported our findings in this paper as part of the Conference on Fairness, Accountability, and Transparency. Let me summarize some findings from our paper, and some observations from the data that didn’t make it into the research paper:

  • DigitalStakeout only collects geotagged Tweets. To target the jurisdiction of the Corvallis Police Department, DigitalStakeout uses a geographical query to the Twitter API which only returns Tweets that are geotagged with the Tweeter’s current location (a rough sketch of this kind of query appears after this list). Lesson? Don’t geotag your Tweets. Twitter has Tweet geotagging turned off by default and has since removed precise locations altogether. Of course, Twitter knows exactly where you are Tweeting from and uses this information to serve you location-based ads. It’s only a matter of company policy that stops Twitter from sharing that information with the likes of DigitalStakeout. I wouldn’t advise relying on company policy.
  • DigitalStakeout poorly configures their searches. For one of the predetermined searches, DigitalStakeout used profile location information to capture Corvallis Tweeters (lesson? Don’t put your location in your Twitter profile). But they configured that search badly, apparently using “Benton” as a search term (Corvallis is in Benton County), returning Tweets from Benton County, Washington and Bentonville, Arkansas. Also, the searches seem to stop after collecting 100 Tweets per week. Useful!
  • DigitalStakeout identifies mostly useless Tweets. We reverse engineered the search terms used for DigitalStakeout’s “Narcotics” search, explaining why the Tweets seemed mostly garbage. Sure, snow, hop, high, line, party, smoke, bowl, rock may refer to drugs, but they almost universally pick up Tweets about weather, beer, and kids’ parties (the sketch after this list uses a few of these terms). Also notable is a proliferation of marijuana terms (e.g. indica, weed, pot, bud), even though pot has been legal in Oregon for the entire subscription period!
  • Tweets identified by DigitalStakeout seem to arise more from Black and Hispanic people compared to the local population. We can’t make direct comparisons, because the former is determined by a human from a Twitter profile and the latter by self-identification on the census, and there are a whole lot of differences between them, but only 1-2% of Corvallisites identify as Black, whereas over 6% of Corvallis Twitter geotaggers appear to be Black. That seems stark.
  • Tweets identified by DigitalStakeout seem to arise more from White people compared to the Tweeters in the area. A more direct comparison can be made between Corvallis Twitter geotaggers and those caught up by DigitalStakeout’s searches. The samples are too small to determine if the effect is significant, but there appears to be an increase in the proportion of White Tweeters.
                 Corvallis Tweeters   DigitalStakeout
    White              71.8%               78.9%
    Black               6.5%                7.2%
    Hispanic           11.7%                7.8%
    Other              10.0%                6.1%
  • DigitalStakeout failed to identify a shooting threat to the Corvallis Police Department. In February 2018 (at a time when we know the Corvallis Police Department was still subscribing to DigitalStakeout), an individual was arrested for Tweets threatening a shooting on Oregon State University’s Corvallis campus. However, the Tweets were not discovered through surveillance of social media but through an anonymous tip line.
  • DigitalStakeout seems to no longer have access to Facebook and Instagram. Our data covers July 2016 to August 2017, with a three-month gap starting April 2017. April 2017 is about when the Brennan Center says that Facebook and Twitter changed their policies to not allow social media surveillance software to access their APIs. Indeed, according to Twitter’s Master License Agreement, the Twitter API “may not be used by […] any public sector entity (or any entities providing services to such entities) for surveillance purposes, including but not limited to: (a) investigating or tracking Twitter’s users or their Content; and, (b) tracking, alerting, or other monitoring of sensitive events (including but not limited to protests, rallies, or community organizing meetings).” But our data show that DigitalStakeout continued to access the Twitter API after April 2017. I asked Twitter about this, and they said “We require a special review and continuous compliance audit […]. We work with DigitalStrikeout [sic] and we continue to work with them on carefully reviewed and approved Use Cases.” Lesson: Don’t trust company policy.
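
To make the mechanics concrete, here is a minimal sketch (in Python, using the requests library) of the kind of geo-bounded keyword query described above: a search against Twitter’s standard v1.1 search endpoint, restricted to a radius around Corvallis and OR-ing together a few of the reverse-engineered “Narcotics” terms. The endpoint and the geocode and count parameters come from Twitter’s v1.1 API documentation of that era; the bearer token, the coordinates, the radius and the particular term list are illustrative assumptions on my part, since DigitalStakeout’s actual queries are not public.

```python
# Sketch only: a geo-bounded keyword search against Twitter's v1.1 standard
# search API, roughly the kind of query described in the post. The token,
# coordinates, radius and term list are illustrative assumptions, not
# DigitalStakeout's actual configuration.
import requests

BEARER_TOKEN = "YOUR-BEARER-TOKEN"  # hypothetical credential
SEARCH_URL = "https://api.twitter.com/1.1/search/tweets.json"

# A few of the ambiguous "Narcotics" terms we reverse engineered.
NARCOTICS_TERMS = ["snow", "hop", "high", "line", "party", "smoke", "bowl", "rock"]

params = {
    "q": " OR ".join(NARCOTICS_TERMS),   # match any of the keywords
    "geocode": "44.5646,-123.2620,5mi",  # lat,long,radius: roughly Corvallis, OR
    "count": 100,                        # v1.1 caps results at 100 per request
}

resp = requests.get(
    SEARCH_URL,
    params=params,
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

for status in resp.json().get("statuses", []):
    text = status["text"]
    hits = [term for term in NARCOTICS_TERMS if term in text.lower()]
    # In practice, these "hits" are overwhelmingly Tweets about weather,
    # beer and birthday parties, not drug activity.
    print(hits, "->", text)
```

Even taking this sketch at face value, the shape of the problem is visible: only Tweets carrying location information can be matched this way, and generic keywords like “snow” and “bowl” guarantee that most of what comes back is noise.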

Racial disparities exist throughout the justice system, including in policing, contributing to a severe over-representation of people of color in US prisons. Given this, we argue that it is important to be able to audit tools used in the justice system for racial disparities. Social media monitoring is simply another avenue for creating disparities, and there are many points at which an inequity could be introduced, including: access to social media, adoption of a particular social media platform, interacting with the platform in a way that gives access to monitoring software, and using certain keywords. One’s behavior, even on Twitter, could increase or decrease attention from law enforcement.

Whether the purpose of social media monitoring by police is for sentiment analysis or risk assessment, unless the population that is monitored mirrors that of the police jurisdiction, the bias will result in a skewed view of the population (if used for sentiment analysis) or undue attention on one sub-population over another (in the case of risk assessment). The log files we were able to obtain from the Corvallis Police Department allowed us to better understand this aspect of policing. While I would argue that any programmatic monitoring of social media impinges on our civil liberties, at the very least, requiring that log files be available for independent evaluation would ensure transparency of the algorithms that are reshaping law enforcement.

Discrimination and the conference publication system

After Trump’s travel ban was upheld by SCOTUS, I started thinking of ways that we could materially help our affected students and colleagues. Those on visas from “banned” countries are at best stranded. But it goes from bad to worse quickly. Will their parents be able to visit? Will students on F1 visas be able to use CPT and OPT for internships? Will those on H1s have any chance at permanent residency?

Of course, I’m reminded that for students studying in the US on single-entry visas from many more countries than just the “banned” countries, including China and Vietnam, life has always been difficult. For a trip home, you may need many months (and an understanding advisor) to have time to reapply for a visa to return to your studies. Your parents may not be lucky enough to get a visa to ever visit you. And those conferences that are outside the US? Well, those are likely impossibilities, and this greatly impacts your ability to publish and network.

And how about non-US based researchers who are citizens of banned or otherwise unwelcome nations? Can they as easily attend US conferences, such as STOC, FOCS and SODA, which are almost always in the US?

So what we have is a conference publication system that is structurally discriminating, as a result of the institutional discrimination of US border control. (If the notions of individual, structural and institutional discrimination are new to you, here’s a short explanation.) US law restricts the travel of people based on their country of origin and current residency status (institutional discrimination). In our field, success depends largely on one’s ability to publish papers. Since conference publications are heavily weighted and since every conference I can recall submitting to has a “one author must attend the conference to present the paper” rule, one’s success depends on one’s ability to attend (or have a co-author attend) the conference (structural discrimination). And since conferences are so normalized, it is expected that junior researchers will do the attending and presenting as much as possible so that other people “get to know you” (again, structural discrimination).

But the discrimination doesn’t end there. We have heard reports of the individual discrimination that women face at conferences (for example in statistics and machine learning and theoretical computer science). You don’t have to be a genius to understand (or you can dig up the science that will help you accept the fact) that when someone faces individual discrimination, they will often avoid the situations where they face that discrimination.

And never mind the myriad of other reasons why people can’t or choose not to attend conferences: disability, care-taking responsibilities, shyness.

Let me spell it out. In order to really succeed in most areas of computer science, you need to publish conference papers and this, for the most part, means attendance at those conferences. But because of the institutional discrimination of border control laws and the individual discrimination that individuals face and the structural discrimination that others face, computer science discriminates based on nationality, gender identity, disability, and family status, just to name a few aspects of identity.

Tell me, how does that discrimination help advance science?

Until we get serious about making real structural changes — fundamentally changing the way in which we are expected to disseminate our work and how our work is evaluated — I fear that we will be discussing the same issues of representation in computer science for decades to come. Perhaps in the past, it was most efficient to share ideas among (mostly white, mostly male) scientists by gathering them together at a conference. But now, in the age of email, videoconferencing, instant messaging, and arXiv, it’s a little ridiculous that we cling to such old-fashioned modes.

So while I don’t have an immediate idea to help students and colleagues impacted by Trump’s “constitutionally sound” travel ban, we could make some positive change for them and others, and for computer science more broadly, by making conferences the exception rather than the rule.

IBM and the Holocaust — why wasn’t this on my radar?

I have spent the last couple of years teaching folks about surveillance and what they can do about it, primarily in partnership with the Civil Liberties Defense Center and through an interdisciplinary course on surveillance and social movements.  In reading about surveillance, I had come across, in multiple places, a statement along the lines of “IBM provided the punch-card technology that allowed the Third Reich to perform the census operations that were necessary to identify Jews,” with a citation to the book IBM and the Holocaust by Edwin Black.  I didn’t think too much about it.  After all, all modern surveillance is supported by technology, and I don’t necessarily place the blame on, for example, Intel for manufacturing chips that Hacking Team ultimately buys.  At worst, it seemed (from a single sentence) that IBM would be guilty of the kind of profiteering that many corporations operating in our capitalist world engage in.

That was until I sat down to actually read IBM and the Holocaust; now I understand the significance of IBM’s involvement in WWII, and it can’t be summarized in a single sentence.  I will do my best to summarize it in a few paragraphs (hopefully a little more succinctly than the book’s Wikipedia article).

Summary of IBM and the Holocaust

First, it is important to understand what punch card technology was like during the early 20th century.  The punch card tabulating machines performed basic statistical, sorting and selecting operations.  They were programmable only by experts and the machines themselves were not bought but leased from IBM (or other companies, although IBM had an effective monopoly on the technology).  In addition, the punch cards themselves were manufactured and supplied only by IBM for IBM machines, and they were manufactured for specific uses.  For example, punch cards would be printed to support the scheduling tasks of a specific railway system or the census of a particular region. The design of the punch card for a specific task was done by IBM employees in cooperation with the leasing group.  IBM directly trained the users of the punch cards and tabulating machines.  That is, IBM had intimate knowledge of the specific uses of their technology in the field.

Second, we need to understand who at IBM would know what their technology was used for.  Most of the tabulating machines used by Nazi Germany were leased by the IBM subsidiary in Germany, called Dehomag, which was 90% owned by IBM. Until Dehomag built its own factory in Germany (after Hitler came to power, and after census operations that would identify Jews were announced), the machines were imported from the US.  Likewise, due to supply shortages in Germany, punch cards were printed outside of Nazi-controlled areas well into WWII. The main reason, in my opinion, for the length of Black’s book is to provide sufficient evidence that Dehomag wasn’t a rogue subsidiary, but instead was heavily micromanaged by Watson (the CEO of IBM at the time).  Black makes it clear that Watson and many other executives at all levels of operations would have known specific uses of their machines.  Watson and other IBM executives made frequent trips to Germany until 1941.  Watson met with Hitler in 1937 and received a medal of honor from the Third Reich.  That is, IBM executives at all levels and internationally would be privy to the specific uses of their technology by the Third Reich.

Third, we need to understand what exactly IBM technology was being used for.  This is possibly the most disturbing part, especially in understanding how much IBM as a whole would have known about their uses.

I already indicated that punch cards and tabulating machines were used in census operations.  This is why IBM and the Holocaust comes up in surveillance literature — the census is considered the most basic form of mass surveillance.  Census operations were ordered by the Third Reich in every territory it invaded.  Nazis were obsessed with identifying all Jews, including “racial Jews”, so the census forms asked not only for one’s religion, but also the religion of one’s parents and grandparents.  They would cross-check this with marriage forms and baptismal data held at churches.  Tabulating machines were used to understand how many “full”, “half” and “quarter” Jews lived in any district.  They also helped the Third Reich understand that, as it imposed harsh regulations on Jews (e.g. the Nuremberg Laws), many Jews would leave for neighboring regions, where the Third Reich would have to confront them again as it conquered neighboring territories, leading to the Nazis’ “Final Solution”: extermination.

Tabulating machines were housed in railway stations to help in the scheduling of trains and in keeping track of the location of train cars.  Trains were used to transport Jews and other “undesirables” to and between concentration camps.  The sheer number of people being transported (in addition to war supplies) could not have been handled with the two-week delay in locating train cars that was typical of the non-punch-card scheduling systems in use at the time.

Finally, tabulating machines were housed at the concentration camps.  Each prisoner had a card that detailed their health, skills and location, as prisoners in good health were transported according to labor needs.  The card also indicated the way the prisoner died: by natural causes (which would include being worked or tortured to death), execution, suicide, or “special treatment” (including the gas chamber).

That is, IBM technology was used to support all aspects of the Holocaust, from identifying and transporting Jews, to managing their populations at concentration camps and recording their executions. Recall how much of this, by design of the punch cards and tabulating machines, IBM would know.

I want to include one story from IBM and the Holocaust which I think is important.  In all the Nazi-conquered territories, census operations made the rounding up and extermination of Jews possible.  However, they weren’t as successful in France, where an estimated 25% of Jews died.  For comparison, an estimated 75% of Jews in Holland died.  In France, the census operations were eventually completed by René Carmille.  However, Carmille sabotaged the operation, preventing any information about religion from being punched into cards.  He also used the census information to mobilize the French Resistance.  He was eventually captured and taken to Dachau where he died of exhaustion.

Why wasn’t this on my radar until now?

IBM and the Holocaust came out in 2001.  I started my graduate education in computer science in 2002.  It took me until possibly last year to even hear a hint of it.  I think this reflects an utter failure, an utter lack, of ethics education.  I was never formally taught any ethics.  I have to say that I was barely ever informally taught any ethics either.  Any ethical considerations I picked up during graduate school, looking back now, were not necessarily sound.  I know that I was just one student, but never have I even heard a colleague make mention of IBM and the Holocaust.  We should be talking about cases like IBM and the Holocaust with our students and among ourselves.  A majority of our students will go on to work for companies just like IBM.  And if they aren’t taught that tragedies like the Holocaust happen because everyone was just doing their job, we are liable for the continued abuse of computer science.

The law does not and should not define our morals

I attended the Advances in Security Education workshop earlier this week where I gave a lightning talk about our Communications Security and Social Movements class.  The last lightning talk (which, unlike all the others, was allowed to go far longer than the 5 allotted minutes), by Anthony Serapiglia of Saint Vincent College, was about his class on forensic training where, as part of the class activities, students retrieve data from old cell phones (mostly flip phones) that are in a variety of states (e.g. seemingly broken, infected with ransomware).

Cellphones available from Goodwill for $6.

Serapiglia described how easily and cheaply one can get these cell phones from Goodwill’s online store (“by the pound”).  That’s right, he is using real people’s old phones that were discarded (in many cases it seems that they were discarded because people thought they were unusable), lost or stolen and ended up at Goodwill.  He has students retrieve personally identifiable information from the phones, phones whose data he himself has not looked at.  He illustrates the retrieved data in a series of increasingly sensationalized photos: a family picnic; a child playing; a full frontal nude selfie; and 4 photos from the same phone of a man and his girlfriend, a hand holding presumably illicit drugs, a bookie sheet, and a man lying unconscious, beaten up (which Serapiglia describes as the owner of the phone having not paid his gambling debts).

I was feeling increasingly sick as I listened to his talk until he brought up a slide that had some mention of ethics on it, at which point I felt even worse.  His take on the ethical considerations was that of protecting his students from the pictures and other data they would uncover from the phones.  On the one hand, his mention of a media spot that he claimed to have done before the course started, describing how easy it is to get data off old phones, seemed to be his personal stand-in for informed consent. On the other hand, his saying several times that his university’s legal counsel gave the green light for the use of people’s presumed discarded data in his class indicated that he was using the law as his moral and ethical guide.  He ended the talk asking us if we would use such real data in our classrooms, and, given the outline of his talk, the only consideration seemed to be out of concern for what the students would be exposed to.

I was relieved that my hand wasn’t the only hand to snap up after the talk.  The first questioner pointed out that such data use would not be legal in Europe and (paraphrasing) that European law, in this case, more closely aligns with morality.  I pointed out that if one uses the law to define one’s ethics, one will fall short every time, and that the law in the US lags behind commonly held societal beliefs.  Another questioner suggested Serapiglia try an experiment to see if his presumed implicit consent closely matches what he would get from explicit consent: put out a call for volunteer cell phones to be used in the same manner; if he doesn’t get any donations, then perhaps he shouldn’t use “discarded” phones either.  Except for one audience member who thought it would be okay to use such an exercise in a graduate, but not an undergraduate, class, everyone condemned the use of people’s old cell phones without their explicit permission.

We have seen, more than once, electronic data treated as divorced from the people who created it, with the claim, at least implicit, that it does not warrant the same level of human-subjects-research protections that we afford people in person.  And while IRBs don’t evaluate non-research uses of human data, that doesn’t mean that we shouldn’t apply the same principles to our classroom activities.  Perhaps even more importantly, we should be aware of the ethics we are implicitly teaching through our classroom activities, as they shape our students.

Teaching Communications Security and Social Movements

My reaction to the Snowden disclosures was a mix of “I’m not surprised” with “this is a lot worse than I imagined”.  Not long after, I went to a small workshop and during breaks tried to engage colleagues in discussing the implications of the capabilities of the surveillance state of which we were now at least partially aware.  Responses were at best disappointing.  The near-universal apathy was disturbing.  Most of the people I spoke with teach undergraduate computer science.  If they don’t care about the ethics of a surveillance system whose maintenance and expansion depend on our graduates, what hope would there be for change?

In thinking, “what more can I do from my highly privileged position?”, my partner and I have been teaching activists in social movements to use end-to-end encryption and other online self-defense techniques.  We’ve also designed a freshman course to teach online self-defense.  Since we are in particular concerned with the impact of state surveillance machinery on social movements, the course is offered through the Difference, Power and Discrimination (DPD) program at OSU and so addresses institutionalized systems of power, privilege, and inequity in the United States.  We will be teaching the technical concepts for understanding online surveillance and the encryption tools that can mitigate it alongside the historical and contemporary impacts that state surveillance has had on social movements.  The course will be offered for the first time this coming Spring — CS175: Communications Security and Social Movements.

From the reading that accompanied the development of this course and our trainings, it became clear that teaching CS175 through the DPD lens was a good move. Muslim populations in the US are subject to heightened surveillance, scrutiny, infiltration, provocation and entrapment.  The Department of Homeland Security monitors those involved with #BlackLivesMatter.  Historically, we know that surveillance is key to suppressing groups that challenge the state (for example, the Black Panther Party and the American Indian Movement) and the mass collection of data on targeted populations has facilitated genocide.  Mass surveillance doesn’t affect us all equally — mass surveillance is disproportionately directed at marginalized groups such as people of color.

To hear about this in our College’s new podcast, start listening here at 15:10.

5 talks in a lecture series at OSU. Only 4 are promoted on OSU’s YouTube page. Why?

In fall 2015, a student group on campus, Allied Students for Another Politics (ASAP!), organized a series of five panels called “Radical Visions Towards Another Politics”:

  • Revolutionary Unions and the Abolition of Wage Slavery
  • “We Won’t Pay!”: How Debtors’ Unions and Strikes Can Lead the Fight for Tuition-Free Education
  • Racism, Capitalism, and the Prison Industrial Complex
  • From Baltimore to Palestine: Israeli Apartheid and the New Jim Crow
  • The Burdens of Climate Change and Economic Growth: Visions for Social and Environmental Transformation

With the support of the School of History, Philosophy and Religion, the panels were all taped and promoted on the OSU School of History, Philosophy and Religion YouTube page. Well, actually, all the panels were recorded, but only 4 of the talks made it onto the OSU YouTube page.  Can you guess which talk did not?  Thankfully, one of the student organizers was able to get a hold of the video of the From Baltimore to Palestine talk and make it available here.

I won’t comment on why this one particular talk was not made available, as I have only heard second-hand the reason.  But hopefully we can as a campus bring light to why, or have the fifth talk promoted with the same level of support of the other four talks.

OSU faculty call for fossil fuel divestment in open letter

[from the press release]

More than one hundred members of the academic community at Oregon State University have signed an open letter calling on the OSU Foundation to divest from fossil fuels. Signers include professors, staff and graduate students from across a wide range of university departments. They include 14 faculty from the College of Earth, Ocean, and Atmospheric Sciences, which is recognized for its world-class climate science. A committee of the Foundation will meet on Friday to discuss the question of divestment.

Among the signers is Kathleen Dean Moore, Professor of Philosophy at OSU, and author of a recent book, Great Tide Rising, on how to respond to the climate crisis. When asked about divestment at OSU, she said that the arguments made against divestment are riddled with flawed logic. “Divestment is about saving your own integrity,” she said. “Divestment will not bring down the fossil fuel industry, but it might allow the university to claim that it really is acting in the interests of its students.”

Another signer is Peter Clark, Professor of Earth, Ocean, and Atmospheric Sciences. He recently led an analysis, published in the highly-regarded journal Nature Climate Change, which found that without a rapid transition to non-fossil energy systems, emissions from burning fossil fuels in the coming decades will commit us to dramatic changes in climate and sea level that will last for the next ten thousand years and beyond. According to the open letter, “divestment is one action among many that will be needed to shift the social logic away from coal, oil and gas, and propel our economy toward cleaner sources of energy.”

Ben Phalan, a Research Associate in the College of Forestry who wrote the letter with colleagues Glencora Borradaile (College of Engineering) and Ken Winograd (College of Education), said: “each of the last two years were the hottest since records started in 1880, and this year is on course to be hotter still. Divestment alone will not stop climate change, but it is an important step in the right direction. It sends a message to companies that it is unacceptable to make short-term profits at the expense of poorer countries, future generations and other species, all of whom are hit hardest by climate change. And it gives a mandate to governments to increase support for low-carbon energy and fulfill the pledges they made in Paris.”

In the past two years, the OSU Faculty Senate and the ASOSU Senate have passed resolutions in support of divestment. A ballot of OSU students last month found that 78% supported divestment from the top 200 publicly-traded fossil fuel companies.

[see the full letter with all 101 signers here]

Fossil Fuel Divestment at OSU: A Brief History

Students march on OSU campus, demanding divestment from fossil fuels.

For nearly 3 years, I have worked with faculty, students and community members on fossil fuel divestment at Oregon State University, asking the Foundation (which, as an independent entity, manages OSU’s half-billion-dollar endowment and much of the university’s messaging) to stop investing our endowment in the fossil fuel industry. In fall 2013, the Faculty Senate at OSU passed a resolution demanding the Foundation divest from fossil fuel investments. In winter 2014, the Student Senate passed a resolution demanding the same. After months of trying to get a meeting with the OSU Foundation, in spring 2014, two Foundation Trustees and several staff members met with a half-dozen students and faculty to discuss the possibility of divestment.

Students deliver a Christmas gift of coal to the Foundation.

The meeting was a non-starter. One of the Trustees, Greg Merten, was a climate-change denier. Not merely a denier of anthropogenic climate change, but a flat-out, belligerent, climate-change-is-not-happening clinger-onto. On the one hand, this was disrespectful. A group of people were hoping for an honest conversation about the actions we could take to help mitigate climate disaster, only to have an instrument of our own university send someone who wouldn’t even let that conversation start, because there is no need to do anything in response to something that is not happening. On the other hand, this was downright embarrassing. For a university where so much climate research is done, where so many of the confirmations and consequences of climate change have been discovered, to have its messaging managed by someone who doesn’t believe in the very science that comes from our institution is … well, frankly unsurprising. After all, Greg Merten is wealthy.

Needless to say, our request for divestment was denied, citing ‘fiduciary’ responsibility. Dollars ahead of ethics. Well done, Foundation.

A simulated oil spill on campus.

Fast forward two years. After a hottest year on record, renewed student interest, two fun protests on campus, and a student referendum in which 78% of students demanded fossil fuel divestment, members of the OSU Divest student group were recently graced with another meeting with the Foundation to revisit the issue of divestment. Only two students from the OSU Divest student group were intended to meet with two trustees and several employees of the Foundation. In a show of strong solidarity, another 10 or so students and faculty, members of Allied Students for Another Politics! and Rising Tide Corvallis, also attended the meeting. The Foundation was clearly caught off guard, quickly pulling more chairs up to the table and more mugs to the coffee stand.

Despite a lack of preparation time, the students at the meeting did an amazing job of holding their ground, not backing down to a weaker proposal, not letting falsehoods be perpetrated by Foundation trustees and staff. Compared to the meeting in 2014, the Foundation seemed more prepared to listen, although one trustee was, while not an outright climate denier like Greg Merten, in favor of pointing a blaming finger at the global south for future emissions.

Molly Brown, a director at the Foundation, made reference to those contacting the Foundation asking them not to divest. Immediately before our 2014 meeting, the Foundation met with a group of unnamed people who, reportedly, gave arguments for not divesting. Molly Brown also referenced people opposed to divestment who had contacted the Foundation in the last two years. I don’t doubt that this has happened. When I first got involved in fossil fuel divestment, I received a couple of nasty emails from College of Engineering faculty members (a nerve-racking experience for my pre-tenure self). It took but two seconds to discover that these same faculty members enjoy direct funding from the fossil fuel industry for their research and summer salaries. Since divestment has been a topic at OSU, there have been three letters opposing divestment in the alumni magazine, written by two people (and 4 letters in favor, by 4 different people); a few clicks on a search engine uncovered that divestment opponents have direct financial conflicts of interest with the fossil fuel industry: Barry McElmurry is now retired from Fluor Daniel Inc (an oil & gas infrastructure construction company) and Mike Moehnke is a district manager of Columbia Steel (“well-known in surface coal mining for its dragline chain”). I’m sure that all those who are vocally opposed don’t have such egregious conflicts of interest — surely some of them are just clinging desperately to neo-liberal ideology so they can maintain their excessive lifestyles without guilt.

A big point to fight against, for me, is the fact that the Foundation is placing the back-door, faceless anti-divestment voices on (at best) an equal or (more likely) a higher footing than our organized, public movement of thousands of faculty, staff, students and community members who have voiced their opinion through petitions, letters, protests, a referendum, and resolutions. Again, I am not surprised. The anti-divestment voices are telling the Foundation what they want to hear. They also probably have more money.

In the meantime, the fight continues. We are currently collecting faculty and staff signatures on an open letter to the foundation demanding divestment (soon to be released to the public); we currently have nearly 100 signatures with many from the researchers whose work tells us how big a disaster climate change is bringing.

Faculty hiring decision processes

In a recent faculty meeting, we discussed the process by which we make hiring decisions.  A college-level rule seems to dictate that the faculty provide feedback on the candidates for a given position, without ranking the candidates, to the unit head.  It seems the unit head then has final decision on the order in which offers are made.  While I disagree with the latter (no one should be surprised), I agree with a decision-making process that avoids rankings.  While I understand that at some point one needs to pick one candidate to make the first offer to and that very much seems like ranking, I think it is in the interest of ensuring unbiased hiring decisions to delay the discussion of ‘first offer’ as much as possible.  At the faculty meeting I tried to offer a process for discussing hiring decisions with this in mind, but since I hadn’t prepared and didn’t ‘have the floor’, so to speak, my comments were somewhat disjointed.  I will try to describe my thoughts and motivations here as succinctly as possible.

First, I would like to propose a consensus-based decision process for determining a subset of acceptable candidates – candidates that our faculty would be okay with joining our ranks.  Note that a consensus decision isn’t necessarily your favorite decision but a decision that you are okay with moving forward with.  There are three concepts (abstaining, blocking and consent) which I will describe in the context of hiring. I think one should abstain from participating in the discussion if one has not spoken with the candidate and has not seen the candidate’s talk.  (Note that our faculty candidate talks are video-taped and I do not think it too high a bar to expect one to watch the talk, if one was absent for the interview, before participating in the discussion.)  If one blocks a candidate, one is saying that the candidate is absolutely unacceptable to hire – this is non-consent to hiring and should be taken seriously.  In this case, I think one should have to articulate, in front of the faculty, why one is blocking, and the other faculty should be allowed to discuss whether or not the block is valid.  For example: if an AI faculty member blocks a PL candidate because their publication record isn’t strong enough but all the PL faculty disagree, this wouldn’t be a valid block; if the members of the graphics group think that a graphics candidate is impossible to work with because the candidate couldn’t hold a conversation for more than 5 minutes and didn’t have any common research interests, this would (probably) be a valid block.  The third option is consent: if you aren’t abstaining and you aren’t blocking, then you should be okay with the potential hiring of this candidate (even if there is a different candidate that you prefer).

At this point, we have a list of acceptable candidates – note that the discussion focuses more on eliminating candidates for being unacceptable and that they should only be considered unacceptable for serious concerns.  My argument for doing this is to minimize bias in faculty hiring decisions.  There are many, many quantitative studies that show that people (male and female, white and black, to greater and lesser degrees) think less of job applicants from non-dominant groups than dominant groups (in terms of social, racial, gender identity).  Rather than comparing the candidates pairwise and asking “better or worse”, if we consider each candidate on their own merits and ask “acceptable?” we are less likely to see impacts of implicit (or explicit) bias in our decisions.

Second, I would encourage a discussion about the merits of each candidate in the acceptable list in terms of what they would add to our department.  In the faculty meeting I picked a colleague and said the “best” candidate may be a clone of this colleague, but a clone wouldn’t add much to the department, since we already have one.  I would encourage people to avoid bean-counting (papers, money, etc) and instead think about certain research talents/abilities, teaching capabilities or interests, and yes, identity characteristics.  For example, say we have two candidates; the first candidate is a black woman with 10 papers in good venues and does research that we are interested in, and the second candidate is a white male with 20 papers in good venues and does research that we are interested in.  I would argue strongly that the first candidate would add more to our department; this may also help counterbalance the additional challenges that she overcame (as supported by data, studies, etc) in getting to where she is compared to her white, male counterpart.  At the end of this discussion, we haven’t ranked the candidates, but for each candidate we would have a list of what that candidate would add to our department.

At this point, it seems that our administration would want us to stop.  But since I believe in non-hierarchical organizing, I would like to imagine a world in which the faculty get to decide who gets the first offer to join them as faculty.  (I didn’t really get to this point in the faculty meeting.)

So, third, based on the discussion of what each acceptable candidate would add to the department, we could start the ‘first offer’ discussion.  I would hope that the second-phase discussion would help narrow down the candidates for ‘first offer’.  The lists of what each candidate adds may even be considered a start of a ranking, although not all additions may be considered equal (or positive).  It might be helpful to take a temperature check: for each candidate indicate yes/no/neutral as to whether you would be okay (again, expressing consent not preference) with them receiving the first offer.  For each candidate for whom there isn’t consensus in this temperature check (all yes/neutral or all neutral/no), one may delve into a deeper consensus-developing process (which I think I will keep for another post in the interest of length); this would be necessary if there is no candidate for whom everyone is yes/neutral.  However, if there is one yes/neutral candidate, then this might just be the first-offer candidate.  If there is more than one, more rounds of discussion and temperature checks might need to happen … as with most things, without trying it out, there is no saying how it would work and (in my opinion) there is little point in developing a process further without actually putting it into practice.

Much of what I have said is based on my 7 years of experience in faculty hiring discussions at OSU and thinking and reading about (and using, in community groups) decision-making processes.

Graduate Teaching on Diversity: Recap

This review of the diversity & ethics class is much delayed.  Partly this is because in the last class I gave a survey to the students asking about their experiences in the class — it was an in-depth and targeted student evaluation of teaching, and I hate looking at these evaluations.  I think I always fear the worst.  Maybe due to some highly sexist comments from the past (“she looks better in a skirt”, I kid you not).

I shouldn’t have been so fearful — the surveys were taken very seriously and I learned a lot from going over them that I hope will shape a permanent course in our department.

I’ll try to summarize here:

What did you get from the course?

I asked several questions along the lines of what did you get from this course? The answers to these questions (*) helped me to uncover how I did in delivering on the pilot learning outcome:

  • Recognize difference, power and discrimination within social systems and their influence on people of diverse backgrounds both inside and outside their discipline.

A vast majority of the students reported an increased or new awareness of explicit and implicit biases in our perception of others, along with the ability to better empathize with people who are different from themselves.  Many students reported that they would be a better ally to others experiencing discrimination and that they would have less fear in engaging and expressing themselves.  Several students commented on the ethics content of the class as well, noting that the online Responsible Conduct of Research modules were informative.  Based on this, I would give myself a B in delivering the learning outcome; I think I didn’t do a very good job of focusing on these issues within electrical engineering and computer science.

Many students commented on the soft skills they improved upon in the class: new mechanisms for group discussions, communication skills, reading skills.  A few students commented that the material and the way the class was delivered was ideal for improving on these skills, and I would agree!

Which readings were most helpful?

A majority of the students commented on the first-person narratives being most helpful, and I am not surprised.  I think I would like to add in some more ‘formal’ readings to give broader context to the course material, but I would try to be very careful to keep the balance on the narratives.  A significant number of the students also commented on the reading Never Meant to Survive; I think this was a great reading, but I think it stood out in students’ minds because there was a very focused deep-reading assignment that went with it.  I would like to do something like this for every reading (or group of readings).

Several students commented on the fact that there was too much reading.  Although the surveys were anonymous, based on their other comments, these were English-language learners who found the reading burdensome; I am not sure how best to react to that.  Some of the same students reported that their reading skills improved, so perhaps it is a hurdle that is worth having.

How did the classroom mechanisms work for you?

If you’ve been reading these posts, then you know I made an effort to have the classroom be interactive as much as possible and I experimented with different ways to do this.  I asked the students which classroom mechanisms were most helpful, most enjoyable, least helpful and most challenging to them.  For details on these mechanisms, please read my earlier posts. Some patterns that stick out based on this:

  • A majority of the students found the small group discussions to be helpful, for multiple reasons (hearing other students’ ideas, not having to speak in front of the entire class, practicing communication skills, etc).
  • Many students found the spokescouncil discussions challenging and I think this reflects students being pushed out of their comfort zones.  Other students found this helpful and enjoyable.  I think it is definitely worth trying this again (with more than one hour to hold it).
  • Many students found the silent discussion to be least helpful or challenging.  The detailed comments reveal that many students had a hard time with the questions used for this.  As I reported earlier, I felt at the time that some better readings would hopefully prepare the students to better engage with these questions.  A few students commented that they liked the silent discussion because they felt more comfortable expressing themselves in writing than in speaking.  As we try to accommodate different learning styles (e.g. oral vs. visual) I think it would be good to use a silent discussion again to meet the needs of students who would enjoy this method of communication.  I think the silent discussion could be improved by having a post-assignment that requires students to synthesize the comments that were made, to try and have the students engage more deeply in the listening side of this discussion.
  • The reaction to the round-robin discussion was mixed.  Those who liked it, liked it because it helped them learn the material better (the vocabulary for diversity) and because it forced them to talk to a lot of different students.  Those who didn’t like it found the task not that deep.  I think there is a split along comfort in English here …
  • Finally, the lecture and faculty panel evoked rather amusing responses.  Students enjoyed that they were hearing from other people and from experts.  Many students commented on how they could sit back and not engage — and there were students who liked that, and students who didn’t like that.  I’m not surprised.

What would you like to see?

I really enjoyed reading the students’ suggestions here.  Some topics that were mentioned (some more than once): income inequality, religious bias, non-conforming genders, ethics of human and animal experimentation.  I would love to cover more breadth of discrimination as highlighted here!

A few students asked for more experiential learning — there is a great exercise that has students experience structural discrimination before learning about it formally, which would be a great early-in-the-quarter activity, for example.  One student asked for better moderation of discussions (which I would love to work on too!) and another asked for more positive content, which … well, would be challenging.  But perhaps it would be good to talk about the response to discrimination from large social movements, like Black Lives Matter, to see that one can fight against this.

Another few students asked for an introduction to US culture, which left me stymied. How?  But then another student’s comments came to the rescue: have a classroom discussion with US students comparing their culture to international students (and vice versa).  Again, this would be a great way to start the class, I think.  Along these lines, one student asked for more mixed group activities; they noticed as I did that students tended to cluster along cultural lines for small group discussions.  I suppose, again, one would want to allow this sometimes but that it might be good to mix the mechanism up and force students to talk with students they normally wouldn’t.

Several students asked for some true orientation content.  Things like information about funding, what grad classes are like, a campus tour!, panels with senior graduate students.  I’m going to chew on all this information for a while, but I have been thinking about proposing a 3 credit (3 hours per week) class that takes the place of our graduate seminar, TA training and this class that would allow for the time for orientation content, soft skill development, and more in-depth coverage of the material.

(*) Specifically:
What were the most useful skills or tools that you learned in this class?
How will what you learned in this class prepare you for your time here as a graduate student and your future career?
How has this class affected (or how will this class affect) your own personal behaviors and actions? Why?
If you were trying to convince a fellow student to take this class, what would you say?