Well, not literal ghosts, but blank spots. It seems we may be facing our first serious bandwidth issues with 28 cameras installed and plenty of summer visitors. Whatever the reason, we’re getting hiccups in our getalongs – cameras are randomly freezing for a few seconds to several minutes each, losing connection with the system, and generally not behaving correctly.
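
To get a rough sense of whether 28 cameras could plausibly saturate a shared network link, here’s a back-of-envelope sketch. The per-camera bitrate and link capacity below are illustrative assumptions, not measurements from our actual installation.

```python
# Rough, back-of-envelope estimate of aggregate camera bandwidth.
# All numbers are assumptions for illustration, not measured values.

CAMERAS = 28
BITRATE_MBPS_PER_CAMERA = 4.0   # assumed H.264 stream, roughly 720p at modest frame rate
UPLINK_CAPACITY_MBPS = 100.0    # assumed shared 100 Mbps link back to the recorder

aggregate_mbps = CAMERAS * BITRATE_MBPS_PER_CAMERA
utilization = aggregate_mbps / UPLINK_CAPACITY_MBPS

print(f"Aggregate stream bandwidth: {aggregate_mbps:.0f} Mbps")
print(f"Link utilization: {utilization:.0%}")
if utilization > 0.8:
    print("Link is likely saturated -- freezes and dropouts would not be surprising.")
```

Under those assumed numbers the cameras alone would exceed the link, which would be more than enough to explain random freezes.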

Today, for example, we were collecting images of ourselves from both the video cameras and a still digital camera to compare their performance for facial recognition. As Harrison, Mark, and Diana moved from right to left along our touch tanks, only one of the three close-up “interaction” cameras they stopped at actually picked them up. It’s not that they actually moved somewhere else, because we can see them on the overhead “establishment” cameras. And it’s not (we think) that the cameras failed to record because of motion-sensing issues, because in one of the two missing shots a family had been interacting with the touch tank for a few minutes before the staff trio came up behind them.

This morning I also discovered a lot of footage missing from today’s feeds, from cameras that I swear were on earlier. I’ve been sitting at the monitoring station pulling clips for upcoming presentations and for the facial recognition testing, and the most recent footage from some of the octopus tank cameras is dimly lit 5 a.m. footage. It’s not a synchronization problem, either (I think): the bar graphic on the viewer that maps recording times across multiple cameras shows blanks for those periods, even though I was watching families on those cameras earlier today. However, when I look at it now, hours later, the gaps don’t seem nearly as big as they did this morning. Maybe this morning’s viewing-while-recording simply delayed playback of the recent-but-not-most-recent footage, and the system cached it and caught up later.
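
I’ve been wishing I could double-check that recording-time bar myself. Here’s a minimal sketch of the kind of gap check I have in mind, assuming the VMS could export per-camera clip start and end times; the camera name, timestamps, and gap threshold are all hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical export of recorded clip intervals per camera: (start, end) pairs.
# In reality this would come from whatever the VMS can actually export.
recordings = {
    "octopus-tank-1": [
        (datetime(2012, 1, 20, 5, 0), datetime(2012, 1, 20, 5, 12)),
        (datetime(2012, 1, 20, 9, 30), datetime(2012, 1, 20, 11, 45)),
    ],
}

def find_gaps(intervals, min_gap=timedelta(minutes=5)):
    """Return gaps longer than min_gap between consecutive recorded clips."""
    gaps = []
    intervals = sorted(intervals)
    for (_, prev_end), (next_start, _) in zip(intervals, intervals[1:]):
        if next_start - prev_end > min_gap:
            gaps.append((prev_end, next_start))
    return gaps

for camera, clips in recordings.items():
    for gap_start, gap_end in find_gaps(clips):
        print(f"{camera}: no footage from {gap_start:%H:%M} to {gap_end:%H:%M}")
```

Running a report like that morning and afternoon would at least tell me whether the blanks are real missing footage or just footage the system hasn’t made available for playback yet.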

Is it because some of the cameras are network-powered and some are plugged in? Is it because the motion sensitivity depends on lighting, so cameras getting too much light have a harder time detecting motion, or because it depends on depth of field and the action we want is too far away? Maybe it’s a combination of viewing footage while it’s being recorded, bandwidth issues, and motion-sensitivity issues, but it ain’t pretty.

I’m back to the sales calls, this time for Video Management Systems, the back-end software that will coordinate all our cameras. This field seems more competitive than that of eye tracking, or maybe there is just more demand, as VMS is what runs your basic surveillance system you find anywhere from the convenience store to the casino. So people are scrambling for our business.

However, whenever we try to describe what we’re doing and what our needs are, we run into some problems. You want to record audio? Well, that’s illegal in surveillance systems (it’s okay for research as long as you get consent), so it’s not something we deal with much. Don’t mount your camera near a heating or cooling vent or the noise will drown out the audio. The microphones on the cameras are poor, and by the way, the audio doesn’t sync correctly with the video – “it’s like watching a bad Godzilla movie,” said the engineer we spoke with this morning. You want to add criteria to flag video and grab certain pieces? Well, you can’t access the video stream, because if you do, it’s no longer forensically admissible and can’t be used in court (OK, we just need an exported copy; we’re not going to prosecute anyone even if they chew gum in the Visitor Center). You want to record high-resolution images? Well, you can either buy a huge amount of storage or a huge amount of processing capability. Minor obstacles, really, but a lot of decision points, even more than with eye trackers. Again, though, it’s a learning experience in itself, so hopefully we’re generating some data that will save someone else some time in the future.
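
That storage-versus-processing tradeoff is easy to underestimate, so here’s a hedged back-of-envelope sketch. The camera count matches ours, but the bitrates and retention period are illustrative assumptions, not figures from any vendor.

```python
# Rough storage estimate for continuous high-resolution recording.
# Bitrates and retention period are assumptions for illustration only.

CAMERAS = 28
DAYS_RETAINED = 30
SECONDS_PER_DAY = 24 * 60 * 60

def storage_terabytes(bitrate_mbps):
    """Total storage (TB) to retain all cameras for the retention period."""
    total_megabits = bitrate_mbps * SECONDS_PER_DAY * DAYS_RETAINED * CAMERAS
    return total_megabits / 8 / 1_000_000  # megabits -> megabytes -> terabytes

for label, mbps in [("~720p H.264 (assume 4 Mbps)", 4), ("~1080p H.264 (assume 8 Mbps)", 8)]:
    print(f"{label}: ~{storage_terabytes(mbps):.0f} TB for {DAYS_RETAINED} days")
```

With those assumed bitrates, doubling the resolution roughly doubles the storage (on the order of tens of terabytes per month for the whole building), which is why the vendors keep steering us toward either bigger disk arrays or smarter on-the-fly compression.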

The pricing and purchasing is a bit strange, too. The companies all seem to have “sales” teams, but many can’t actually sell anything more than the software, and some don’t even sell their software directly. Instead, we then have to deal with retailers and sometimes “integrators” who can sell us hardware, too, or at least specify requirements for us. Then there’s the matter of cameras – we haven’t decided on those, either, and it’s becoming clear that we’ll need several different types. Juggling all these decisions at once is quite a trick.

At least it’s a moderately amusing process; many of the sales folks are based here or were visiting the Northwest recently, and we’ve commiserated over the last week about all the rain/snow/ice that ground the area to a halt from Seattle to Eugene.

I found this interesting New Scientist piece the other day:
http://www.newscientist.com/article/dn21040-avatars-with-your-body-language-get-your-point-across.html
As these sorts of technologies become more common and affordable, what does this mean for interactive exhibits and remote visitor observation? Are people more comfortable with the notion of being “watched” by cameras today than they were 10 years ago?