A lens on the sciences

The workshop on Theory of Computation as a Lens on the Sciences at Berkeley was this weekend. For the three people out there who haven’t seen Christos Papadimitriou’s talk of the same name, I recommend it. The workshop follows up a near-decade-long push to use theoretical computer science as a methodology for studying seemingly distant fields (economics, physics, biology, etc.), and Richard Karp, who opened the festivities, lauded the effort, which has (among other achievements) motivated new programs at NSF.

I was only able to attend the first day.

Ehud Kalai started and ended with the same question: “are games too hard to solve?” While admitting that the answer has remained “yes” for decades, he argued for the value of studying simple(r) classes of games that allow predictions and solutions, presumably offering intuition for more complex and realistic games.

Christos Papadimitriou was entertaining, as usual, giving an overview of the ties between economics and TCS using the examples of sponsored search auctions and selfish routing. I’m hoping they will post slides or videos (the talks were being recorded, so I imagine this will be the case), as he gave a great quote on nobody having an incentive to change if nobody changes. (As an advocate for alternative transport, this is a sentiment I’m often up against.)
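
The standard warm-up for selfish routing is Pigou’s example, worth a quick sketch (mine, not from the talk): one unit of traffic travels from s to t over two parallel links, one with fixed cost 1 and one whose cost equals its load. Since the load-dependent link never costs more than 1, nobody has an incentive to leave it, and selfish traffic all piles on.

    def total_cost(x):
        """x = fraction of traffic on the load-dependent link."""
        return x * x + (1 - x) * 1.0   # x users each pay x; the rest pay 1

    eq = total_cost(1.0)                                  # selfish equilibrium
    opt = min(total_cost(i / 1000) for i in range(1001))  # brute-force optimum
    print(eq, opt, eq / opt)          # 1.0, 0.75, and their ratio 4/3

That ratio of 4/3 is the price of anarchy here: the cost of letting everyone route selfishly versus coordinating.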

Tim Roughgarden’s talk included a short history of sponsored search, and he referred to the “turn of the century”. I sat confused for a few seconds … turn of the century … how did sponsored search come to be studied at the turn of the century … oh … turn of the current century … have I ever used that phrase to describe 2000? … certainly not … have I heard anyone use that phrase to describe 2000? … I must be getting old … hang on … Tim Roughgarden is older than me … do I live under a rock?

The next two talks were on avenues of research unfamiliar to me. Les Valiant spoke on attempts to study evolution (yes, the biological one) using ideas from learning theory. I was rather skeptical of the model of evolution under study (as was a questioner after the talk), but I’ll give the benefit of the doubt that studying simple models can lead to some intuition. I just hope that anti-evolutioners don’t take any results out of context as ‘proof’ of the invalidity of evolution. Michael Kearns discussed the use of crowdsourcing to solve graph problems in a local manner. His primary example (with actual experiments on actual people – crazy, right?) was graph colouring, where each participant is a node and can only see the neighbouring nodes. Perhaps surprisingly, the crowd was not able to 2-colour a cycle, but was able to 4-colour a preferential attachment network.
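
The setup is easy to play with in simulation. Below is a minimal sketch (my own, with simulated myopic agents standing in for Kearns’ human subjects, and networkx assumed for graph generation): each node repeatedly adopts the colour that conflicts least with its neighbours’ visible colours.

    import random
    import networkx as nx

    def local_colouring(G, n_colours, rounds=5000, seed=0):
        """Myopic dynamics: each agent sees only its neighbours' colours."""
        rng = random.Random(seed)
        colour = {v: rng.randrange(n_colours) for v in G}
        for _ in range(rounds):
            v = rng.choice(list(G))                     # one participant acts
            seen = [colour[u] for u in G[v]]            # local view only
            options = list(range(n_colours))
            rng.shuffle(options)                        # break ties at random
            colour[v] = min(options, key=seen.count)    # least-conflicting colour
            if all(colour[a] != colour[b] for a, b in G.edges()):
                return True                             # proper colouring reached
        return False

    ring = nx.cycle_graph(38)                           # even cycle: 2-colourable
    pa = nx.barabasi_albert_graph(38, 2, seed=1)
    print("2-colour the ring:", local_colouring(ring, 2))
    print("4-colour preferential attachment:", local_colouring(pa, 4))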

The most entertaining talk for me was Mark Newman’s, on the structure of actual networks (both natural – biological, social – and technological). It shed light on some interesting properties and motivations:

  • The smallmouth bass is cannibalistic.  Food-chain networks need self-loops.
  • Interesting networks almost always have one big component: if a network had several big components, the chance that no edge crossed between them would be low, and if it had no big component, no one would be interested in studying it.  (The sketch after this list checks this, and the next two points, on synthetic networks.)
  • In epidemiological graphs, the interesting quantity is the square of the degree: a person of high degree is both more likely to catch and more likely to pass along a communicable disease – each proportional to the degree.
  • Social networks have positively correlated degrees – popular people have popular friends.  Technological networks have negatively correlated degrees – most neighbours of hub nodes are spokes.
  • Only 40% of friendships are reciprocated.  Get on board, facebook.
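
Several of these quantities are easy to eyeball on synthetic graphs. A quick sketch (my own illustration with networkx, not Newman’s data; a finite preferential-attachment graph stands in for a hub-and-spoke technological network and typically comes out mildly disassortative):

    import networkx as nx

    # One big component: a sparse random graph just above the connectivity
    # threshold grows a single giant component plus dust, never two giants.
    er = nx.gnp_random_graph(5000, 2.0 / 5000, seed=0)
    sizes = sorted((len(c) for c in nx.connected_components(er)), reverse=True)
    print("two largest components:", sizes[:2])

    # A preferential-attachment graph as a stand-in for a technological network.
    pa = nx.barabasi_albert_graph(5000, 2, seed=0)

    # Epidemiology: <k^2>, not <k>, is the quantity of interest, since a
    # degree-k node is ~k times likelier both to catch and to pass on a disease.
    deg = [d for _, d in pa.degree()]
    print("<k> =", sum(deg) / len(deg),
          " <k^2> =", sum(d * d for d in deg) / len(deg))

    # Degree correlations: a negative coefficient means hubs attach to spokes.
    print("assortativity:", nx.degree_assortativity_coefficient(pa))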

So thanks to Berkeley for hosting. I’m loath to complain, as the talks were excellent (despite being free), but I would ask for clearer breaks … three hours straight of talks makes a trip to the restroom tricky if you are sitting mid-row. Of course, unlike the men’s room, the ladies’ room was not going to have a queue, with only one female speaker (on day 2) and my estimated gender (im)balance of 20:1. Surprising given the high number of students present. Berkeley, Stanford, etc. … are your numbers really so off?

3 thoughts on “A lens on the sciences”

  1. Christina

    Bravo for a great post. I really believe more (notice I didn’t say all) algorithmic research should be inspired by problems in other scientific disciplines, and that the future of high-impact computer science research is in interdisciplinary study.

    I thought I would pitch in two cents about phylogenetic algorithms (a.k.a. determining an evolutionary tree). At first sight, studying evolution using machine learning algorithms may appear esoteric, but the use and application of these algorithms goes a lot deeper. There is a plethora of research, relying on these phylogenetic algorithms, that tries to determine portions of the human genome that are under selection during long-term use of HIV-1 drugs. This (type of) research would not be possible without (a) efficient, sophisticated algorithms and (b) high-performance computing (HPC).

    Another example of the necessity of algorithmic research in biology is full genome sequencing. DNA fragment assembly is a computational step in genome sequencing that is (in my opinion) purely an algorithmic problem; a toy version is sketched below.
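
    In its most stripped-down form, fragment assembly is the shortest common superstring problem. Here is a toy greedy merge (my own sketch, nowhere near a production assembler, and the reads are made up) to show the algorithmic core:

        def overlap(a, b):
            """Length of the longest suffix of a that is a prefix of b."""
            for k in range(min(len(a), len(b)), 0, -1):
                if a.endswith(b[:k]):
                    return k
            return 0

        def greedy_assemble(frags):
            """Repeatedly merge the pair of fragments with the largest overlap."""
            frags = list(frags)
            while len(frags) > 1:
                k, a, b = max(((overlap(a, b), a, b)
                               for a in frags for b in frags if a != b),
                              key=lambda t: t[0])
                frags.remove(a)
                frags.remove(b)
                frags.append(a + b[k:])   # glue b onto a, dropping the overlap
            return frags[0]

        reads = ["ATTAGACCTG", "CCTGCCGGAA", "AGACCTGCCG", "GCCGGAATAC"]
        print(greedy_assemble(reads))     # -> ATTAGACCTGCCGGAATAC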
