What ChatGPT thinks about the Bucks and why it matters for us and you.
The world is hell. Within the world, journalism is hell. And within journalism, sports journalism is hell.
There are myriad factors that help explain each of these levels of hell. But a particularly pressing factor for sports journalism is that it apparently no longer requires sports journalists.
We here at Brew Hoop are not sports journalists in the traditional sense, but the development of generative AI still poses existential questions to us basketbloggers. It also poses questions to you: why read what some myopic schmuck (read: me) has to say about the Bucks when you can ask an entity that can leverage a sizable chunk of recorded history?
To explore this question, I listened to what a myopic schmuck (again: me) had to say about the Bucks, and I asked the most recent version of ChatGPT the same questions. Although I am a horse in this race, I believe that I came out on top—not due to any particular expertise, but rather by virtue of being human.
Unfortunately, a perfect randomized controlled trial was doomed from the get-go: Riley dared to ask me how I was doing, and it felt awkward for me to ask the same of ChatGPT.
Yet, this false start yielded a few takeaways. For one, I started out strong with borderline misinformation: there is, in fact, a Trail Blazers vanity plate here in Oregon. Humans make mistakes… although you’ll see soon enough that we’re not the only ones. And notably, we’re self-aware enough to admit fault.
Otherwise, our five minutes of banter reflected shared knowledge (e.g., I moved to Oregon) and a shared network (e.g., Former BO$$MAN Mitchell Maurer). That shared background facilitated the conversation, whereas the lack thereof likely contributed to my awkwardness in starting a conversation with ChatGPT (alongside it, um, not being human).
Now let the RCT begin!
Which new Buck are you most intrigued by?
I boldly answered AJ Johnson, due to his lateral and “whatever the front/back version of lateral is” quickness. I attempted to be humorous (the Bucks are “prehistorically” old) and salty (“for the three minutes that Doc will play him every game”). I also likened the season preview quiz to the quizzes that I would be giving in real life this fall (as a teacher).
ChatGPT, on the other hand, went with… Dame:
The actual writing is decent, if a bit bland. However, the main concern is that Dame doesn’t fit the question: he may be relatively “new,” but in this question “new” means “hasn’t played for the Bucks before.” Since ChatGPT (interestingly) asked me a follow-up question, I used it to clarify the question:
The goal of my prompt was twofold: 1) indicate that Dame arrived last year and 2) specify that I was interested in its thoughts on Bucks who arrived this offseason. ChatGPT’s answer of Malik Beasley—who is not even a Buck anymore!—makes it clear that its knowledge is not current enough to include the most recent offseason. What’s worse, it doesn’t even imply awareness that it is out of date, even after I hinted at that in the prompt. In a world that is hyperfocused on the present, that’s a problem. The actual writing is again decent, although it seems to imply that playing with Jrue (…in training camp?) would noticeably improve his defense.
Which young Buck (third year or less) will have the greatest impact on this season?
I answered AJ Green, with a nod to Doc’s trust in him, three-point shooting, and miraculously being a non-negative defensive player. It was a pretty basic answer.
ChatGPT spun the wheel and ended up with… MarJon Beauchamp:
The lack of up-to-date knowledge really burns ChatGPT here. It is crazy to think that it was only a year ago that MarJon ranked ninth—ninth!!!—in our annual Ranking the Roster series, a two-step uptick from eleventh the previous year. He beat out AJ Green (eleventh, up from seventeenth), AJax (thirteenth), and Chris Livingston (fourteenth). This year, MarJon fell to fourteenth, behind all three of those guys, as well as a guy who hasn’t even touched an NBA floor (AJ Johnson), and he is on the trading block to open up a spot for a guy who literally cannot walk. Woof.
In terms of the content, it was a reasonable glass-half-full take on the MJB experience. The phrase “a more established playoff-tested player” seems to hit the same nail twice, though. Similarly, the broader phrase “if he shows enough promise early in the season, he could indeed become part of a trade package for a more established playoff-tested player” is awfully similar to “if Beauchamp can increase his trade value, it could open up the possibility for the Bucks to land a solid veteran who fits seamlessly into their playoff rotation.” I guess this idea came up frequently in the MarJon corpus, lol.
These instances of repetition illustrate that ChatGPT doesn’t really remember what it has already said. Moreover, the general format of its answers—typically an answer, two paragraphs and a question (note the lack of an Oxford comma, @lohausfan)—is starting to wear on me. The second case of repetition seems to help it flesh out the second paragraph in its second response (#second).
Rank the new free-agent signings in how well they will fit the roles they’ve been brought in to fill.
Welp. I imagined that this question, in particular, would have flummoxed ChatGPT, so I didn’t even ask it. Meanwhile, I disobeyed the question by refusing to rank them, instead offering the “insight” that each of them fills their role on paper: Delon Wright is an adult, Taurean Prince is a wing, and Gary Trent Jr. is a fifth starter.
Rank these players in terms of their importance to the season: Damian Lillard, Khris Middleton, Brook Lopez.
Here I disobeyed again, opting to rank Dame first but keep Khris and Brook tied. My rationale was that Middleton and Lopez are known entities at this point in their tenures with the Bucks, whereas Dame remains a partial question mark on the back of last season.
In contrast, ChatGPT followed the rules:
Again, content-wise, it’s a reasonable response. Its knowledge base isn’t so outdated that it still thinks that Brook camps inside on offense. ChatGPT also figured out the cadence of the interview, so it spared itself the indignity of me cutting it off by ending with “Shall we move on to the next question?”
Still, there are some head-scratchers. It made a grammatical mistake in describing Middleton as “Milwaukee’s go-to wing player, a reliable scorer, and shot creator” (something that we humans, especially here at Brew Hoop, never do…). I don’t understand its statement at the end beginning with “The pecking order…” Does Lillard have to be the offensive “engine” for the Bucks to compete “at a championship level”? I seem to recall they won a championship without him. And the repetition rears its head again, with ChatGPT just as excited about Brook’s “newfound” three-point shooting ability as national broadcasters are: not only does he have the “ability to stretch the floor with his three-point shooting,” but also his “ability to hit threes will help space the floor.”
What do you think the team’s greatest weakness currently is? (Could be specific to something on the court, lineups, wider concerns, etc.)
For me, it was guys five through eight in an eight-man playoff rotation. I’m confident in the top four, but concerned that the next guys are either too old (Pat) or too young (the AJs) to be legitimate playoff role players. However, I was remiss not to talk about the new acquisitions, who could fill three of those four slots.
I’m sorry to report that ChatGPT had no problem identifying the team’s greatest weakness. In fact, based on its rampant bolding, it appears that it identified four. I guess it heard about me disobeying a question and wanted to get in on the action:
If you can overlook the references to Old Friends and, more damningly, the incorrect usage of “while,” what you’ll see is that these weaknesses start to blur. Although Khris Middleton is bolded and marks a new paragraph that starts with “Additionally,” it appears to be a continuation of ChatGPT’s concern about perimeter defense. Taken together, those two paragraphs then reference its other two concerns: depth (“the team will likely have to rely more on players like MarJon Beauchamp or Jae Crowder”—deranged) and injuries (“if Khris Middleton… needs time to recover from past injuries”). Then the depth paragraph references injuries, and the injuries paragraph references Middleton specifically. It’s a hall of mirrors that could be entirely avoided by cutting the third and fourth paragraphs, which would obey the question to boot. Moreover, it clearly cares about perimeter defense more than the other concerns, as it forms the basis for its questions.
Based on its response, I thought the news that we signed Delon Wright would be music to its ears, so I shared that in my response to its questions:
I thought the “Memory updated” text might signify that ChatGPT would remember that Delon Wright is a Buck now, but it actually meant that it remembered that I’m a Bucks perimeter defense truther (due to Delon Wright and AJax’s hoped-for development). So I suppose it wasn’t surprising that its follow-up answer felt like saving face.
Fill in the blank: Something you hope to see the Bucks do during the regular season is __ (i.e., what would make watching 82 games worthwhile).
LET. BOBBY. COOK. You heard me in the roundtable and you can hear me at 28:00 in the pod. I’ll say it one more time: let him cook.
ChatGPT is apparently against Bobby finally winning the Sixth Man of the Year award, because (for some reason) it answered differently:
I suppose you could technically fill in the blank with however many answers you want, but I’ll still chalk this up as another instance of ChatGPT disobeying the question.
The answer seems to be the initial bold text, but the second paragraph only addresses the second part of it (“new offensive sets around Lillard and Giannis”). It’s a decent paragraph, although I don’t know if I would count “playing faster” as a “combination.” I suppose the “experiment with different lineups” is covered by the next paragraph, although it only focuses on individual players. Interestingly, the three players are initially labeled as “new,” even though MarJon was not new at the time, before the broader “younger or newly acquired players” is used in the next sentence. Again, I’m not sure that ChatGPT remembers what it has already written.
Fill in the blank: In order for the Bucks to win a title, they must ___.
I recycled my previous answer here by highlighting the importance of the latter half of the eight-man playoff rotation, with an honorable mention to unlocking the Giannis-Dame pairing. Calling back to a previous response explicitly differentiated me from ChatGPT, for better or for worse. And I suppose I technically disobeyed the question by not limiting myself to a single answer, although I stressed guys five through eight and likely only offered another answer so as not to fully recycle my previous response.
At this point, ChatGPT is starting to reach saturation, as are my critiques of it:
You get the drill. Lots of bold. Giannis and Dame (hey, I agreed!). Injuries. Defense (“especially on the perimeter”)—even though it wasn’t mentioned in the first, highly bolded sentence. Multiple answers. Paragraph-y format.
But wait—there’s more!
Predict the team’s finish in the East and how far they go in the playoffs.
I offered a measured take that the Bucks would finish third in the East (behind Boston and New York and CERTAINLY ahead of the Sixers), taking them to the Conference Semifinals, where they would lose to the Power of (now somewhat diminished) Friendship. In short, a slightly better version of last year.
ChatGPT said “hold my drink”:
REJOICE! They’re going back to the Finals, baby, with a chance to win the darn thing!
Alas, ChatGPT’s perspective is from last offseason, when this was a more reasonable take. This year, a little less so.
In terms of the actual writing, it’s a little dry. You don’t get a sense that it’s drawing on knowledge of other teams in making its assessment. In fact, it seems to suggest that the Bucks could meet the Celtics in the Finals (an impossibility for two Eastern Conference teams), although I’ll give it the benefit of the doubt. On the plus side, it was aware that it was the last question and pivoted fittingly.
But of course, regular listeners of the pod know that we conclude with a guest-specific segment, usually based on their favorite hobby. I spent an inordinate amount of time rambling about echidnas in “Where in the World is Morgan Ross?” I might have thought it awkward to ask ChatGPT how it was doing, but apparently, I had no issue asking it to come up with a segment based on one of its supposed hobbies.
Check this out:
There’s a lot to unpack here. “I’m all about diving deep into sports analysis?” Cringe. To its credit, it was a fun suggestion. Less fun was describing Chris Bosh as articulate. I suppose Jaylen Brown makes sense, but there’s a fine line between “known intellectual” and “Kyrie Irving.” (ChatGPT’s second response starting with “Haha, yeah” after I commented on the pick was top-tier.) ChatGPT asked me twice if I would make any changes, though, so I suppose it’s on me for not taking it up on that.
This set of responses also highlighted something I’ve noticed about ChatGPT in my teaching. Let’s play a game: count the number of “groups of three” (e.g., apples, bananas, and carrots) in the above passages. I counted four:
- cooking, acting, or even solving puzzles
- well-read, articulate, and has interests in various fields
- sports, entertainment, and culture
- intelligence, curiosity, and diversity of knowledge
The pattern is frequent enough that I doubt that the actual content carries much meaning. Rather, ChatGPT commits to the format and then picks words to fill it.
Throughout this article, I hope to have shown how ChatGPT provided decent responses to our season preview while falling short in several ways. Although it and other generative AIs will doubtless improve, the pace of these improvements is not as fast as their stakeholders make it seem (it’s almost like they have financial incentives to suggest that the technology is growing exponentially). Many of these issues cut to the core of the challenge this technology faces: it is hard to write like a human—or at least, in our case, like a basketblogger.
But I want to avoid solely making the argument that “humans are better at sports journalism than generative AI,” because this argument will weaken over time (even if it is never truly rebutted). To broaden the argument, let’s look at the end of the interview with ChatGPT:
Of the entire interview, the last phrase left the deepest impression on me: “I’ll be rooting for Milwaukee this season!” It shook me because, more than any of its analysis—including its analysis based on outdated information—it is demonstrably false. It will not be rooting for Milwaukee this season. Rooting for the Bucks—the uplifting, crushing, embodied experience of fandom—falls within the purview of humans. It isn’t enough to say that you’re rooting for them, as ChatGPT can. It’s a way of life, a life that ChatGPT doesn’t experience.
Within the hellish world, the hellish world of journalism, and the hellish world of sports journalism, Brew Hoop is a haven for thoughtful writing about the Bucks. But it’s more than that: it’s a haven for Bucks fans. Even if you disagree with some of our writing (see: my case for Bobby Portis), you know that you’re part of a shared experience.
I hope you’ve enjoyed this piece, but even if not: I’ll be rooting for Milwaukee this season.