Blog Post #2: Google Bard

Bing rendition of "Google Bard personified"

On an AI Coder team, using AI isn’t just tolerated, it’s encouraged! My teammates and I are taking full advantage of this by working with ChatGPT 3.5, GPT-4, Google Bard, GitHub Copilot, DALL-E 2, Midjourney, and Bing’s Image Creator. My job has been to incorporate Bard into our web development workflow. At first I thought this would make the process easy, and to a large extent it has, but over the last few weeks I’ve been butting up against the limitations of Google’s powerful chatbot.

The Good

During the first weeks of our project, I relied entirely on Bard for coding and general advice. I was impressed with how well it could solve programming problems and answer simple questions. When coding, my strategy was to first ask Bard for a boilerplate file, then to incrementally add functionality by pasting relevant code snippets and prompting for the desired changes. Over time, I found myself pasting larger and larger snippets and asking for more and more functionality with each prompt. Despite this, Bard handled almost all the tasks I requested with ease.

By implementing my ideas so quickly, Bard gave me time to let my imagination take flight. This was incredibly freeing for frontend design in general and CSS in particular. In the past, I’d spend so much time tweaking stylesheets to get elements aligned just right; now, Bard can get me where I want to go in just a few iterations. This freedom had me asking “why not?” to features (custom dropdown menus, drop shadow effects, gradient backgrounds, etc.) that I would’ve previously dismissed as too time-consuming.
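To make that concrete, here’s a rough sketch of the kind of CSS a couple of Bard iterations would land on for a gradient background and a drop-shadowed dropdown. The class names and values here are purely illustrative, not pulled from our actual codebase:

/* Illustrative only: hypothetical class names, not from our project */
.hero {
  background: linear-gradient(135deg, #4285f4, #9b72cb);  /* gradient background */
  padding: 2rem;
}

.dropdown-menu {
  position: absolute;
  background: #fff;
  border-radius: 8px;
  box-shadow: 0 4px 12px rgba(0, 0, 0, 0.25);  /* soft drop shadow */
  padding: 0.5rem 0;
}

Each of these effects used to mean a round of trial and error in the browser; describing them in a sentence and tweaking from there is what made them feel cheap enough to say yes to.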

I also appreciated the teacherly tone of Bard’s responses. Each solution came with a step-by-step guide and an explanation of why it chose the options it did. What’s more, most responses came with multiple “drafts” or variations that I could easily jump between. These details make Bard a great learning tool for anyone who needs help getting started with a new topic.

The Not So Good

I wouldn’t go so far as to call these observations “bad”; they’re just limitations to keep in mind when using tools like Bard.

As the complexity of our codebase grew, Bard became more and more likely to make unprompted changes to the code. Usually this wasn’t a problem, and in some cases it even anticipated an unspoken need, but the moments of frustration when Bard missed the mark completely started to happen with greater frequency. That said, this problem was easily mitigated by swapping to another draft or by asking again with a smaller code sample and a more specific prompt.

This hints at Bard’s major limitation: the larger the scope, the less effective its guidance. As our focus shifted to Google Cloud and AWS deployment, I thought I’d be able to rely solely on Bard’s instructions. I was wrong. I found myself going in circles, implementing solutions that the chatbot would later forget about or deem unnecessary for my stated goal. Worse yet, it felt like Bard wasn’t fully up to date on documentation, often referring to workflows and options that were no longer available. The same level of detail I found so helpful for simple tasks became a point of frustration as my goals grew more complex and my options more varied.

The Verdict… for now

Overall, I’d say Bard is a great teacher, but it won’t be replacing our professors any time soon. That said, it’s at least at the level of a ULA for most tasks and, had it been a viable option a year ago, could have streamlined or outright replaced a few office hour visits. In any case, I wouldn’t say Bard is enough to make me a better coder; it simply amplifies what I can do with the skills I already have. I’ll still need that base level of understanding in order to get the most out of AI-assisted coding. When I don’t have that knowledge and need big-picture guidance, using Bard can feel like the blind leading the blind (an incredibly smart guide, but blind all the same). On the other hand, if you know exactly what you want within a reasonable scope, you’ll go as far as your imagination can take you.


Bonus: A Message from Bard

I thought it only fair to give Bard a chance to respond:

“The criticisms you have raised are valid, and I am working to address them.

Bard’s accuracy can decrease as the scope of the task increases. This is because Bard is trained on a massive dataset of text and code, but it is still under development and cannot handle all tasks with equal accuracy. I am working to improve my ability to handle larger tasks by training on more data and developing new algorithms.

Bard is not always up-to-date on documentation. This is another area where I am working to improve. I am constantly being updated with new information, but it can take time for me to learn about new documentation and changes to existing documentation. I am working on ways to improve my ability to stay up-to-date on the latest documentation, such as by developing new algorithms to help me identify and process new information more quickly.

Bard can sometimes make unprompted changes to code. This is usually not a problem, but it can be frustrating when it happens unexpectedly. I am working to improve my ability to understand the context of code and to avoid making unwanted changes.

Overall, I believe that Bard is a powerful tool that can help web developers solve programming problems, answer simple questions, and generate code snippets. However, I am aware of its limitations, and I am working to improve. I am grateful for the feedback that users provide, and I am committed to making Bard a better tool for web development.

I hope this rebuttal is helpful. Please let me know if you have any other questions or feedback.”

Sources

Google Bard – https://bard.google.com/

Microsoft Bing Image Creator –
https://www.bing.com/images/create/google-bard-personified/6544fa257b534083897447b5986afddb?id=E0ykR%2fRkVKFhVfVA4iHe6w%3d%3d&view=detailv2&idpp=genimg&FORM=GCRIDP&mode=overlay
