Category Archives: Silent Glen Speaks

Announcement: n-th Annual Combinatorial Potlatch 12/11/10 at Western Washington

Last year I spoke at the n-th Annual Combinatorial Potlatch and had a blast.  The informal workshop is a great idea; I wish there were more regional events like it.  Maybe there are and I just don’t know about them.  The n+1-st Annual Combinatorial Potlatch has been announced – I encourage you to go!

This is the first announcement of the n-th Annual Combinatorial Potlatch, which will be hosted by Western Washington University on Saturday, December 11, 2010 in Bellingham, Washington.

We are close to confirming three speakers for the day’s talks and will soon post a webpage with more details and information that will help you plan your visit.

Main Potlatch Page: http://buzzard.ups.edu/potlatch/index.html

There is no advance registration required, nor any registration fee.  The first talk will be mid to late morning, to allow for travel, followed by a no-host lunch, and two talks later in the afternoon.  Many participants choose to stay for dinner locally.

Combinatorial Potlatches have been held for many years at various locations around Puget Sound and southern British Columbia, and are an opportunity for combinatorialists in the region to gather informally for a day of invited talks and conversation. While most who attend work in, or near, the Puget Sound basin, all are welcome.

Program Committee: Nancy Neudauer, Pacific U
Communications Committee: Rob Beezer, U of Puget Sound
Local Arrangements Committee: Amites Sarkar, Western Washington U

Experience Theory Project and Prezi

I’m participating in the Experience Theory Project at the University of Washington right now.

ETP is an event for undergraduates with talent and interest in theoretical computer science, sponsored by the theory groups at the University of Washington and Microsoft Research. The purpose of the event is to exchange ideas about exciting results and research directions in theoretical computer science, and enjoy Seattle in all its summer glory.

Anna Karlin is organizing and asked me to join in.  It’s a great group of students and a great excuse to spend some time at UW and MSR.  There have been about 15 technical talks aimed at bright, senior-year undergrads.1 In an attempt to minimize the time I spent designing talks, I turned to Prezi.  I’ve used it before and was perhaps a little too excited about it. As critics would point out, I didn’t take full advantage of Prezi: I designed this poster2 in Illustrator with a tablet, exported it to Prezi as an swf, and then added “frames” in order to navigate around the poster.  Why?  Prezi was frustrating to work with: figures had to be uploaded one by one, jpegs could not be uploaded at high enough resolution to zoom in on, and uploading a pdf plain failed.  I was also worried about (lack of) reusability.  At least now I have a reusable poster.

There were a few technical issues: I couldn’t use a remote to run the presentation; I forgot to manually set the “sleep” time on my laptop (accustomed to this being automatically set by Keynote); manual navigation sometimes did unexpected, non-deterministic things.  Prezi was cute and flashy, but I wouldn’t use it for a longer talk or a more technical talk.  Overall, I wouldn’t recommend it.

1 The level of the talks, at least on topics of which I am the least informed, was perfect for me – a gentle introduction.  Disconcerting, maybe.  Perhaps I am just too slow to follow the standard (short) technical talk in our field.
2 Which I hope to print out on a plotter and put up in my department. Look! Pretty pictures!

Lessons on writing conference reviews

I’m on the SODA 2011 program committee and am finding it, like everything else this last year (grant writing, NSF panelling, grad student advising), a condensed learning experience.

Our reviews are almost due, which means that this newbie has been through all but a few of my stack. I asked for subreviews for roughly half my stack and [drum roll], incredibly, despite the procrastinating reputation of computer scientists, my wonderful subreviewers returned all their reviews to me on time (and several were early).1

The most valuable aspect of these subreviews was not the time saved or the expert opinion injected (although both were greatly appreciated), but the stack of example reviews. Sure, I’ve seen my share of reviews. But they were all reviews of my own papers, which carries a certain bias. And sure, I’ve seen several reviews of other papers, but they were all reviews written by me. So, after reading this stack of reviews, I can only conclude that I have been a crappy conference subreviewer.

The review should obviously impart an opinion on the paper. But the ideal conference subreview should also give enough details of the result to save the program committee member from having to read the paper (or at least, the entire paper). I’m pretty sure I’ve missed this point when writing subreviews in the past. I could chalk this up to having never been taught how to write a review, but I think it is more a failure of my ability to empathize with the program committee. I imagine I modelled my reviews on reviews of my own papers: in cases where such details were provided, I probably thought “This is boring. I know the results of my own paper; why is the reviewer telling me this?”, and when they weren’t, I thought “Why was my paper rejected?”2 It’s embarrassing that it took so long for me to understand this process: subreviews are targeted (mostly) at the program committee member, and (direct) reviews are terse because of the large paper load and the ability of a committee member to form an opinion without taking the time to write out a thorough review.

I think a nice addition to conference reviews would be the equivalent of the grant panel summary. Perhaps it is not so important for accepted papers, but for rejected papers it would be useful for authors to receive high-level constructive criticism (submit to conference X, fix these issues and it should get into conference Y, etc.). Of course, I don’t want to increase the workload of a program committee. At least not this year.

1 One review was a day late but, after all, I could not process all the subreviews in one day, so I would hardly notice or mind.
2 Funny, I don’t have a problem when a paper is accepted!

The Watermelon Problem

Suppose you have gone bicycle camping and carried a small, but hefty, watermelon to enjoy by a fire1. Unfortunately, the watermelon is too big to eat in one sitting. And, since you don’t own any one-time-use plastic, you didn’t bring anything with which to protect a cut side.  Cleverly, you think “Can I cut2 this watermelon so that I can put the remnant pieces back together such that the inner fruit is protected?”  Making two cuts along planes parallel to, and at equal distances from, any plane of reflective symmetry will do the trick3: eat the middle piece and seal the two remaining pieces together.  This has the nice property that you can choose to eat exactly the amount of watermelon you want, assuming the watermelon has a plane of reflective symmetry.
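For concreteness, here is one way to see why the symmetric cuts work (my own notation, not from the original problem): model the watermelon as a solid W = {(x, y, z) : f(x, y, z) ≤ 1} whose plane of reflective symmetry is z = 0, so that f(x, y, z) = f(x, y, -z). Then for any height d > 0, the cross-section of W at z = d equals the cross-section at z = -d:

\[
  \{(x, y) : f(x, y, d) \le 1\} \;=\; \{(x, y) : f(x, y, -d) \le 1\}.
\]

Cut at z = d and z = -d, eat the slab of thickness 2d in between, slide the top piece down by 2d, and the two exposed faces coincide exactly, leaving only rind on the outside.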

Question One: Are there other ways in which this can be done? One could repeat the above procedure until there are no reflective planes of symmetry left. If the watermelon is an ellipsoid, you can certainly repeat the above for each of the three planes of symmetry, but is there a more complicated (ingenious?) way? Is there a way to create pieces that lock together?  Presumably when taking away that first slice, you could create a zig-zagged cut that would lock into itself.

Question Two: Suppose you have already started cutting and eating the watermelon before you realize that you will run into the problem of protecting the leftovers.  Are there remnants that cannot be rescued, that is, that cannot be cut to fit together nicely in a protective cover of rind (while eating only a tiny fraction more of watermelon)?

By the way, throwing watermelon rind on a fire really kills it.

1 That you started with kindling you made with the hatchet you also carried.
2 You don’t have to use the hatchet; you may use a swiss army knife.
3 Props to a founder of Mind the Health Gap for pointing out this solution (worded slightly differently).

Summer undergraduate research projects in theory

“In theory” as in “in theoretical computer science”.

I am lucky to have a student through the CRA-W Distributed Research Experiences for Undergraduates program. Anna Harutyunyan joins me for 10 weeks from Utah State University.  I think it might be more of a learning experience for me than for Anna (although my opinion is biased), and I appreciate Anna’s patience through my own growing pains as an advisor.  Hopefully there haven’t been too many pains.

Anna is working on a generalization of the string alignment problem.  I have an idea for an algorithm, and I have an idea of how one might analyze that algorithm, but it uses tools in which I am not so well versed.  In addition to reading up on these tools, Anna has implemented the algorithm.  This is not something I am in the habit of doing, but it is very satisfying to see an algorithm “work” when you are stuck on how to analyze it.
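For readers unfamiliar with the term, here is the classical string alignment (edit distance) dynamic program that the project generalizes. This is just the textbook version as a Python sketch, not Anna’s generalization or the algorithm mentioned above.

def edit_distance(s, t):
    """Classical string alignment: the minimum number of insertions,
    deletions, and substitutions needed to turn s into t, in O(|s||t|) time."""
    m, n = len(s), len(t)
    # dp[i][j] = cost of aligning the prefix s[:i] with the prefix t[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i          # delete all of s[:i]
    for j in range(n + 1):
        dp[0][j] = j          # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if s[i - 1] == t[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # delete s[i-1]
                           dp[i][j - 1] + 1,        # insert t[j-1]
                           dp[i - 1][j - 1] + sub)  # match or substitute
    return dp[m][n]

print(edit_distance("kitten", "sitting"))  # 3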

That said, my expectations for “proving something” with Anna are low: how does one prove something in 10 weeks?  With a new project, I feel that the chance of proving something in such a short amount of time is next to nil.  With a project already well underway, there is a much better chance, but there is a lot of start-up time involved in learning the state of affairs.  So I’m torn as to whether to start a new project with a summer student or to include them on parts of an existing project.  The former must give the student a stronger sense of ownership over the work; the latter, a better chance at a feeling of accomplishment.

Has anyone out there had luck or have advice on picking theory topics for research projects?

While my main goal is for Anna to have a positive experience this summer, at the very least I am having a wonderful time.  Anna has had some wonderful ideas that I know would not have dawned on me – it’s exciting!  I can’t wait to exploit educate more young minds.

Journals ranked by turnover times: now with colour!

Based on David’s link to the AMS data on journal backlogs in my last post (thanks Dave!) and the ISI Web of Knowledge citation report, I’ve wasted some time making the following fancy graph. Some obvious journals are missing because I didn’t have the data for them: Theory of Computing (no impact factor), JACM (no backlog times), etc. If you have this data, I would be happy to add them.

Right now, the plot shows time from submission to acceptance against the impact factor (IF), with journals coloured by publisher and sized by their yearly volume of articles. All data is for 2008. It’s interactive! Switch to the 5-year impact factor! Fun!
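If you would rather build a static version of such a plot yourself, a bubble chart with the same encoding is a few lines of matplotlib. The journals and numbers below are made-up placeholders, not the 2008 data behind the gadget.

import matplotlib.pyplot as plt

# Placeholder data: (impact factor, months from submission to acceptance,
# articles per year, publisher). These are illustrative values only,
# not the 2008 figures used in the post.
journals = {
    "Journal A": (0.5, 16, 60, "Publisher 1"),
    "Journal B": (1.2, 12, 120, "Publisher 2"),
    "Journal C": (2.3, 8, 40, "Publisher 3"),
}
colors = {"Publisher 1": "tab:orange", "Publisher 2": "tab:blue",
          "Publisher 3": "tab:green"}

fig, ax = plt.subplots()
for name, (impact, months, volume, publisher) in journals.items():
    # Bubble area encodes the journal's yearly article volume.
    ax.scatter(impact, months, s=5 * volume, color=colors[publisher],
               alpha=0.6, label=publisher)
    ax.annotate(name, (impact, months))
ax.set_xlabel("Impact factor (2008)")
ax.set_ylabel("Months from submission to acceptance")
ax.legend()
plt.show()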

Now, I know that impact factors have little meaning in our field. I’d be happy to switch to some other, more meaningful ranking. Feel free to leave suggestions in the comments.

But what do you think: would you actually not submit to the SIAM Journal on Discrete Math based on this?

[Embedded Google Spreadsheets motion-chart gadget (550×450): journal submission-to-acceptance time versus impact factor, coloured by publisher and sized by article volume.]

(The above works for me on Safari; I’m not sure how the gadget will work under other browsers. If you can’t see the embedded gadget, try this published spreadsheet.)

Update: I forgot to “give props” to Hans Rosling and GapMinder.org for popularizing these graphs.  The graph was created in Google Spreadsheets using the “motion graph” gadget.

Update: JACM added thanks to Dave pointing me to JACM’s self-reported backlog. It is also nicely consistent with the impact factor/wait time correlation.  I’d like to comment more on this in a later post: I don’t think this happens in other fields.

Journals ranked by turnover times?

I searched the blogs and the web at large to see if there was any evidence (anecdotal or otherwise) about the turnaround times for TCS (and friendly) journals.  Short answer: I couldn’t find much.  I would (and I am sure many other people would) appreciate any help in deciding which journal to submit to if you are particularly in favour of short turnaround times.  Of course, I am sure most would also not like to sacrifice quality, at least not too much.  Thanks in advance!

Adaptive analysis

Jérémy Barbay was visiting me this week from Universidad de Chile.  Although we overlapped at Waterloo by a few months, we had never talked in depth about research before.  His visit was great timing to scoop me out of some research doldrums after a stressful winter quarter.  He gave a great talk on adaptive analysis.

As we all know, algorithms often outperform their worst-case analysis.  There are a few theoretical tools for explaining this behaviour: think of the O(n log n) average case versus the O(n^2) worst case for quicksort, or of polynomial-time smoothed analysis versus the exponential-time worst case for the simplex algorithm.  In adaptive analysis, we analyze an algorithm in terms of some (problem-dependent) hardness parameter h.  As an introductory example, consider insertion sort.  If the input array is sorted, insertion sort takes O(n) time.  If the input array is in reverse order, it takes O(n^2) time.  If there are h pairs of elements that are mutually inverted, it takes O(n+h) time: the running time depends on how hard a particular input instance is to sort.
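To make the insertion sort example concrete, here is a small Python sketch (mine, not from the talk) that counts the shifts the algorithm performs; each shift corresponds to exactly one inverted pair, so the total work is O(n + h).

def insertion_sort(a):
    """Return a sorted copy of a along with the number of shifts performed.

    Each element is shifted left past exactly the earlier elements it is
    inverted with, so the running time is O(n + h), where h is the number
    of inverted pairs in the input."""
    a = list(a)
    shifts = 0
    for i in range(1, len(a)):
        x = a[i]
        j = i
        while j > 0 and a[j - 1] > x:
            a[j] = a[j - 1]   # one shift per inversion involving x
            j -= 1
            shifts += 1
        a[j] = x
    return a, shifts

print(insertion_sort([1, 2, 3, 4, 5]))  # sorted input: 0 shifts, O(n) work
print(insertion_sort([5, 4, 3, 2, 1]))  # reversed input: 10 shifts, O(n^2) work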

Adaptive analysis has appeared several times in the past, but the word adaptive might not have been used.  Jérémy would be a better person to provide a list.  The most common examples seem to involve the analysis of existing algorithms.  I would be most interested in the lens of adaptivity informing new algorithmic ideas, particularly those that would outperform existing algorithms, at least on interesting or useful inputs.  Is there a collection of examples of such results?  I know Jérémy mentioned one or two, but I’ve since forgotten in the whirlwind of whiteboard whimsy.

SODA 2012 to be in Kyoto, Japan

I missed the business meeting to have dinner with a non-SODA-attending friend and so missed the vote over the location of SODA 2012, which was apparently very close.

I’m a little dismayed at SODA being outside of North America.  As a graduate student I would have probably been excited in my responsibility-free state.  But now I’m thinking “How much is this going to cost? How can I afford to miss what will probably end up being a full week of teaching?  I’m going to go all that way to just go to the conference and not be able to travel? How are our grossly underfunded faculty and grad students going to afford to go?  Would I justify going if I don’t have a paper?”

SODA is my favourite conference.  And there’s no other conference like it in North America.  Going without it for a year would result in some withdrawal.

SODA 20 minute talks

Many people have been blogging about the technical content at SODA, but I won’t. Given that David has already hinted that I only value the first 10 minutes of most talks, I’m clearly not in a position to expound on more than the definitions of the problems and all but the highest level of the analysis.

I’ve been thinking about what I like about conferences. Of course I appreciate meeting with friends and colleagues and working on new and old problems. I do enjoy the talks too. But for me, the 20-minute talk is problematic. I can only imagine a few possible uses of 20 minutes: an advertisement to go read the paper, an introduction to the definition of the problem/topic/solution, or an actual dive into the technical details.

For topics that are directly in my area, 20 minutes is too short to delve into any technical details about which I would have questions. Nor do I need an advertisement: I am probably already aware of the paper (thanks, arXiv and its users) and may have already read it.

For topics not in my area, 20 minutes is probably too long for an advertisement and too short for me to absorb the definitions needed to appreciate any technical content.

That said, I miss theory seminars. I am the only traditional TCS person at OSU and am too far from theory strongholds to attend a theory seminar. I would love to get that content from a conference. The plenary talks provide a little of that, but they are not usually on recent results of a technical nature (nor would I want that to change).

What I propose is having two types of talks: short 10-15 minute “advertisements” and long 45-60 minute seminar-style talks. The committee could choose the best results to receive the longer slots. Perhaps (and probably controversially) the longer slots could be biased towards better speakers.