
Experiments in teaching: am-I-ready-for-this? quiz followup

One of my experiments in teaching this quarter was a quiz in the second week of class on material that I considered so basic that, if you couldn't do well on it, you might want to consider (re-)taking the undergrad algorithms course first.  A few students with lower scores on the quiz did decide to drop the class.  Well, term is over now and I can see how good an indicator this quiz was.

Shown below are the student grades on the midterm and final (y-axis, midterm 'o' and final '*') vs. the quiz score (x-axis) – both axes are linear.  The brighter the shade of green of the vertical bar connecting a student's exam scores, the higher that student's final grade.  While there are a few outliers, I think that I wasn't wrong in saying that a low score on the quiz may indicate that you aren't ready for grad algorithms.  What you can't see as well here is that the midterm scores tracked the quiz scores quite linearly – not so surprising, as the midterm covered material that I would expect starting CS grad students to know anyway.
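For the curious, a plot along these lines takes only a few lines of matplotlib – the scores below are made up for illustration (the actual class data is not reproduced here):

```python
# A minimal sketch of the plot described above, with made-up scores.
# Quiz score is on the x-axis; midterm ('o') and final ('*') are on
# the y-axis, joined by a vertical bar whose shade of green brightens
# with the student's final course grade.
import matplotlib.pyplot as plt

# Hypothetical per-student scores, all out of 100.
quiz    = [45, 55, 60, 70, 75, 80, 90, 95]
midterm = [50, 52, 65, 68, 80, 78, 88, 96]
final   = [40, 60, 58, 72, 74, 85, 92, 90]
grade   = [0.35, 0.45, 0.55, 0.6, 0.7, 0.8, 0.9, 1.0]  # course grade, normalized to [0, 1]

fig, ax = plt.subplots()
for q, m, f, g in zip(quiz, midterm, final, grade):
    colour = (0.0, g, 0.0)                 # brighter green = higher course grade
    ax.plot([q, q], [m, f], color=colour)  # bar connecting the two exam scores
    ax.plot(q, m, 'o', color=colour)       # midterm
    ax.plot(q, f, '*', color=colour)       # final
ax.set_xlabel("quiz score")
ax.set_ylabel("exam score (midterm 'o', final '*')")
plt.show()
```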

So I’ll probably do this again next year.  I’m better informed though.  I’d like to include a few harder questions to give a better indication of mathematical maturity than this year’s version provided.  My goal?  A perfect indicator, so that I can skip all forms of evaluation and simply assign the students a grade based on the first quiz.

What does arXiving mean?

What does it mean to post a paper to arXiv?  More specifically, a paper that has not been accepted at a peer-reviewed venue; less specifically, to any easily searchable, time-stamped, respected repository.

Scenario A: You have a result, but there is no decent deadline for another few months.  Maybe you know that a ‘competing’ team is working on the same result.  Should you post to arXiv?  Would that actually protect you from being scooped if someone else published the result in the meantime (perhaps at a venue that you deemed unsuitable)?

Scenario B: You are building on a result B that has appeared on arXiv but has not (yet?) been accepted at a peer-reviewed venue.  You have verified the work in B.  Can you reference an un-traditionally-published work?

Scenario C: You are reviewing a paper C and, being a diligent reviewer, you brush up on the latest in the area.  You find a very relevant paper posted on arXiv, paper X, dated before paper C would have been submitted.  Paper C makes no reference to paper X.  What do you do if paper C seems awfully similar (similar techniques, similar results) to paper X?  Does your opinion change if paper C is a subset or superset of paper X?
I suppose as a reviewer, you would review the paper and point out paper X to the editor/PC member.  But as an editor/PC member, what do you do?  After all, it is possible for independent researchers to come up with the same result using similar techniques at the same time (I have seen this happen).

What does arXiving mean?  Does it do more than provide an easy repository for papers?  Do we (in TCS) treat arXiv differently than other areas?

Writing reference letters

I was just sitting down to write the first[1] reference letter that I have ever written and realized that I have never read a reference letter and have little idea of what should go into one.  This particular letter is for a graduate student applying for a fellowship.  Short post, but any suggestions?

Maybe I should start tweeting.

[1] This is actually the third reference letter I've written, but the first was to be read by a close friend in the math department and the second was to be read by me.

A skulk of FOCS talks

The FOCS talks are now available online!  I waited until now to report on FOCS for this very reason.  I am not about to compete with Suresh or Lance for live-blogging conferences.  I'm not sure where they find the time amidst talks and meetings in hallways to do so.  (I have to say, whenever I can't make it to a conference, I very much appreciate such posts.)  I did manage to find the time to circle a few listings in my wonderfully compact one-page program as a reminder that I liked these talks and to point my students to them.  Of course, now I've forgotten why I liked them, but I can almost certainly guarantee that I was kept engaged throughout the talk and learnt something – a vote of confidence if ever there was one!

In no particular order (unfortunately it is not possible to link directly to the talks, so you'll have to go and find them in the list):

  • The geometry of scheduling presented by Nikhil Bansal.
    Coauthored with Kirk Pruhs. I added the ever-so-wonderful note to my program: ‘neat not tight geometry problem’.
  • Fast approximation algorithms for cut-based graph problems presented by Alexander Madry.
    I probably enjoyed the fact that Alex did not stand at the podium but walked around, indicating things on the slide (novel!).  Unfortunately the camera does not move from the podium.  Ghost speaker!
  • Approximating maximum weight matching in near-linear time presented by Seth Pettie.
    Coauthored with his student, Ran Duan.  I’ve always enjoyed and always learned something from Seth’s talks.  I wonder if we could have a rating system for speakers with little stars in the program so that you can attend talks well outside your comfort zone if you know the speaker will be good?
  • A separator theorem in minor-closed classes presented by Ken-ichi Kawarabayashi.
    Coauthored with Bruce Reed. This talk had an amazingly thorough introduction that perhaps those new to H-minor-free graphs might appreciate.
  • Logspace versions of the theorems of Bodlaender and Courcelle by Michael Elberfeld.
    Coauthored with Andreas Jakoby and Till Tantau.  I love the example tree decomposition and his slides more generally.  I keep meaning to ask him for his slides to get that tree decomposition figure …
  • A nonlinear lower bound for planar epsilon-nets by Noga Alon.
    Some of the best humour at the conference.
  • And of course Dan Spielman’s Nevanlinna-Prize talk Laplacian Gems. I have heard that his prize talk at ICM was amazing too, but I haven’t watched it yet.

So there are three hours of fun theory listening for you.  I think they would pair well with a fine bottle of Oregon Gewurztraminer.

Women are not at an advantage in our field

I was asked a question a few months ago:

Do women have an advantage in our field?

There was a time when I would have chirped ‘NO!’ and stormed off.  That time might not have been too long ago.  But it is an interesting question, perhaps because it is so ill-defined.  What does advantage mean?  Which women?  Undergrads, grad students, faculty?  What is our field?  Computer science in academia, research labs, industry; theoretical computer science?

The arguments I have heard for 'yes' are all closely related.  Because you are a minority, you stick out and garner more attention.  Because we have all been told that we have to do something about the gender inequality, we go out of our way to make sure you are taken care of.  Because the higher-ups tell us that we need to improve our 10% rate, we have affirmative action policies so that we hire you.  I wish I'd done a better job over the years of keeping track of the various studies pointing to increased attrition rates for women at every stage of educational and professional advancement, to women being judged on their accomplishments while men are judged on their potential, and to women needing to perform at a much higher level to reach a position equivalent to that of their male counterparts.  But I haven't kept the links around; they can't be that hard to track down, but I'm on a shuttle at the moment.  Instead I'll give you my personal view – the view I usually give when I am asked this question in person.

First, not all attention is good attention.  I do believe that when I meet someone at a conference, they are more likely to remember my name than I am to remember theirs.  Women do stick out when they are only a tenth of the population.  But often enough I have had the experience of being sought out not for research conversation but because I am a woman – not because I am a computer scientist.*  Even though this may have only happened a handful of times in countless interactions, it makes me question whether all the truly professional interactions have really been so.  It makes me wonder: does this person even respect me as a computer scientist?  When/if it comes time for tenure letters, do I have to blacklist people who I feel see me first as a woman and only then as a computer scientist?

At Waterloo, an undergrad told me that she was tired of all this "women in math" stuff she was expected to do.  She just wanted to study math.  So yes, the extra effort isn't always positive.  At the training level, this extra effort can be viewed as unfair and undeserved attention that puts women at an advantage over men.  This perception itself lessens the advantage.

And then there is affirmative action.  A comment from a fellow grad student at Brown: "Well, you don't have to worry, women have a much easier time getting jobs in our field [because of affirmative action]."  Again, the misperception.  The intent of affirmative action is to overcome the (possibly subconscious) gender biases that are known to occur in the hiring process.  It is not, and should not be, the preferential hiring of candidates who are not competitive.  So long as we still hear comments like "she only got the job because she was a woman", women are not at an advantage.  And if you think this doesn't happen, you just need to read the comment thread on the who-got-jobs-where post over at Computational Complexity.

So, it's my personal belief that women are not at an advantage while training or working in academia.  I can't speak for industry, but I can't imagine it is much different.

* Yes, I realize that this is a two-way street, but I would argue that the gender inequality causes it to happen more often to women than men.

Pedagogical excuses: bad penmanship

Finally an excuse for my bad black/whiteboardpersonship!

Apparently, retention of information is improved if the way it is presented is difficult to read:

… if something is hard to see or hear, it feels disfluent … We’d found that disfluency led people to think harder about things.

Aside: the lead author, Connor Diemand-Yauman, either has a surprisingly non-unique name, or partakes of reality tv-show contestantship while majoring in psychology.

Experiments in teaching: problem-solving sessions

In a more significant experiment than the am-I-ready-for-this? quiz, I am rethinking the assignments that accompany my grad algorithms course.  In last year's class, I had the grad students work in randomly-assigned and rotating (different for each assignment) groups.  I will comment on this in another post.

I'm sticking with the group-based approach – partly for feasibility.  But rather than having standard written submissions and written comments/grades, I am having the students participate in a type of problem-solving session; an idea I stole from Claire Mathieu.

Each group will prepare solutions to two problems ahead of the 2-hour problem-solving session.  Each group (A) will explain the solution to one of their problems, X (picked by me), to another group (B), who will then explain the solution to me, with instant feedback/help/clean-up.  Group B should leave the session satisfied that they understand the solution to X and will prepare a written solution within 2 days.  The grades of both groups A and B will depend on the oral explanation I was given and on the written solution to problem X.  Every group will take the role of both teacher and student for one problem (that is, group B will then explain the solution to one of their problems, Y, to group A).  The written solutions will be placed in a (private-to-OSU) repository for other groups to see.  For details, see here.
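To make the mechanics concrete, here is a toy sketch of one session's pairings – the number of groups, the assignment of prepared problems, and the random pairing rule are all illustrative assumptions on my part, not the actual scheme:

```python
# A toy sketch of one problem-solving session's pairings, under some
# illustrative assumptions: 12 groups and 12 problems, with group i
# having prepared solutions to problems i and i+1 (mod 12), so each
# problem is prepared by exactly two groups.
import random

n = 12
prepared = {i: {i, i % n + 1} for i in range(1, n + 1)}  # group -> its two prepared problems

groups = list(prepared)
random.shuffle(groups)                     # pair the groups off at random
for a, b in zip(groups[0::2], groups[1::2]):
    x = random.choice(sorted(prepared[a]))        # 'instructor' picks one of A's problems
    y = random.choice(sorted(prepared[b] - {x}))  # ...and a different one of B's
    print(f"group {a} teaches problem {x} to group {b}, who write it up;")
    print(f"group {b} teaches problem {y} to group {a}, who write it up.")
```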

Students are encouraged to repeat this process for other problems that they did not solve or learn; there are as many problems as groups (12) and every student knows who has solved each problem.  I’m hoping this will be a helpful, less lonely, way for them to prepare for the midterm and final (which will determine the bulk of their grade).

I'm hoping that this will help students learn to solve the types of problems they will be asked on the midterm and final, and (more importantly) the types they might face in their research (or in job interviews).  I'm also hoping that it is a more effective use of class time than hearing me lecture for another 2 hours a week.  (I have 4 total hours of class time per week.)

As before, I will (bravely) ask my students to comment.  I will do my best to take the comments into consideration to improve the remaining 5 problem-solving sessions.  I have already received one comment that will take effect next session: in the last session, some problems went undiscussed; in future sessions, every problem will be discussed (by some pair of groups) and posted to the repository.  Comments from non-students are always appreciated!

Experiments in teaching: am-I-ready-for-this? quiz outcome

Last week, I gave the students in my grad algorithms class an am-I-ready-for-this? quiz.  I promised to report back, and I'm already a little late on that.  The average for the quiz was ~70% – I was hoping for a higher average, given how easy the quiz was (in my opinion).  Two students did not take the quiz (and have since dropped the class), and two students who scored in the bottom 10% also dropped the class; so perhaps the quiz had the intended effect.

I am more interested in hearing what the students in my class have to say, though.  So, I’m opening up the comments to them:  Was the quiz useful?  Did you study for the quiz?  How could I make the quiz more useful?  I will try to use this feedback in future years, so please be honest.  Feel free to respond anonymously with a fake email and fake name.

Experiments in teaching: am-I-ready-for-this? quiz

I am teaching 'the grad algorithms course' for the second time.  It is the first time I am teaching any course for the second time, and I am excited at finally having the opportunity to fix my previous mistakes.  'The grad algorithms course' is required for all CS Ph.D. students in our department and is a prerequisite for any other grad course that I teach.  Last year I had ~30 students.  This year I expected the same, if not fewer, since I had heard that grad enrolment was high last year and low this year.  But no.  First the 35 slots filled up.  Then the 10-slot waiting list filled up.  Then they raised the cap (complete with a room change 3 days before term) to 45.  Then the class filled up again.  Cap raise + room change to 49 the day before class.  STOP!

Enrolment has waned back down to 38, perhaps at least partly due to my first experiment in teaching: the am-I-ready-for-this? quiz.

Last year I was a softy.  Don’t think you have the background for the class?  Give it a try! Come by my office, I’ll bring you up to speed.

I'm not doing that this year.  Sure, it will probably save me some time, but mostly I think (hope) it is more fair to the students in the class who do have the background.  So in the first class of the second week, I am giving a quiz on material that is either (a) standard and easy undergraduate algorithms material or (b) very easy to learn in roughly one hour of reading, given standard undergraduate algorithms material.  My motivation came from Jeff Erickson's Homework Zeroes, and the quiz has the following goals:

  • Formally letting the students know that even though this course may be required for their program and they were accepted to the program, they may need to do some work before attempting the course.
  • Getting the students thinking about algorithms and paging back in their (fond) memories of undergrad algorithms.
  • Getting the students reading material, learning the lingo (particularly if their undergrad courses were not taught in English) before we get into the harder material in the class.

The quiz is next week. I’ll try and remember to report back on how it went.

e-readers compared

I have been testing out various e-readers over the summer and thought my experiences might help save other late-adopters some time.  I started looking into e-readers because I suffer from eye strain from staring at backlit screens (laptop, desktop, smartphone).  If I need to read more than one page of a pdf, I will print it.  Often I end up printing a 10-page paper and reading 1.5 pages of it.  I was hoping to find something better.  I didn't even think an e-reader could be used because I didn't know how the technology worked.  The e-ink screens really are like paper.  Very easy on the eyes.

My ideal e-reader would allow me to:

  1. Read pdfs of technical papers easily.  In some fields colour is important, but I've always printed in B&W, so the lack of colour was not a deal-breaker for me.
  2. Annotate pdfs to make notes on technical papers.
  3. Annotate pdfs to grade student homeworks.
  4. Read books that I have borrowed from my library, thebestlibrary.net, which has many, many books available for download for free!

The e-readers I tried were (in this order): the cheap, basic, entry-level Kobo e-reader from Chapters; the Sony Reader Touch Edition with stylus; the Sony Reader Pocket Edition (no stylus); and the Kindle from Amazon without 3G.

The Kobo was great for reading: like chocolate for the eyes.  It was terrible for transferring files and books to it.  Terrible.  It did not handle pdfs well at all.  Zooming was next to impossible.  Refreshing the screen was very slow.  It could not annotate anything. Returned.

The Sony Touch was better for file and book transfer.  Pdfs were better than on the Kobo.  The stylus made annotations pretty easy.  But.  But it was terrible, horrendous, awful to read.  It is an e-ink screen with a SHINY surface.  This means you need ambient light to read the screen, but that same light causes reflections of yourself, the lamp, the world on the surface.  I really don't understand how you are supposed to read it.  Returned.

Presuming that the shiny surface was required to make the stylus work, I tried the Sony Pocket.  Better, but the small screen made reading pdfs difficult.  The refresh time was too slow to make zooming and panning a reasonable solution.  Returned.

The Kindle handles pdfs very well.  The refresh rate is quick enough to make panning down a small-font pdf in landscape mode a reasonable method of reading.  I haven't experimented too much with annotations, but the built-in keyboard works well.  Transferring files is as easy as printing: you simply email the file to your special Kindle email address.  I didn't opt for the 3G because I don't intend on buying $10 books that can only be read by me on my Kindle when I can read books for free at the library (and support my library along the way).  The screen is as nice as, if not nicer than, the Kobo's.  I'm discovering new uses for my Kindle, such as emailing my lecture notes to it and teaching from it.  There is also a web browser that I have yet to really experiment with.  I am hoping to be able to read my RSS feed on it.

The biggest disappointment with the Kindle is that it does not handle the EPUB format that is used by libraries to lend e-books.  I could work around this, but it would require at least four steps (including the illegal step of stripping the DRM) to get a book from my library to my Kindle.  Not worth it.

I’m keeping the Kindle because it seems useful for work and that was the main motivation for purchasing it.  Hopefully Kindle will start supporting EPUB, but I won’t hold my breath.