Goes for 30 tonight, with 7 boards and 7 assists. She seems like the best player on the Valkyries. Should be considered for Most Improved Player in the league.
Think I heard her referred to by one of the TV commentators as "the pride of Northwestern." (Doesn't hurt!)
Great to see!
Hate to break it to you - I work in AI. And AI is as biased as people in most cases, if not more so (particularly generative AI). Some types of tech may help, but don't hold your breath on AI helping with real-world, complex tasks like officiating a game.
When is AI going to fix bad game officiating in the pro and college ranks?
Nakase calls out officiating after Valkyries lose big in Game 1
Valkyries coach Natalie Nakase called for officials to give her team a "fair fight" after, she said, the officiating tilted in favor of the Lynx in Game 1. (www.espn.com)
But I bet an AI referee system would've called at least 3 penalties on Oregon last week! Humans are more biased (and blind).
And the training data is based on what humans historically do, with the added bias of some data scientist somewhere deciding what training data to use.
That’s based on how LLMs are used currently - scraping the internet, looking at what's been done, and trying to predict "the next best word" (or pixel, or whatever). It can certainly be trained using actual data of penalties so all of these blatant mistakes stop helping cheaters.
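(A toy sketch of that point and the one above it - everything here is made up, the features, the numbers, the 20% home-team figure - just to show the mechanism: the same model learns or unlearns the refs' bias depending on whose labels you train it on.)

```python
# Toy illustration only: invented features and labels, not real officiating data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
contact = rng.uniform(0, 1, n)        # hypothetical "severity of contact" feature
home_team = rng.integers(0, 2, n)     # 1 = contact committed by the home team

truth = (contact > 0.5).astype(int)   # ground truth: hard contact is a foul

# Historical calls: refs swallow the whistle for the home team 20% of the time
ref_calls = truth.copy()
ref_calls[(home_team == 1) & (rng.uniform(0, 1, n) < 0.2)] = 0

X = np.column_stack([contact, home_team])
trained_on_calls = LogisticRegression().fit(X, ref_calls)
trained_on_truth = LogisticRegression().fit(X, truth)

# The model trained on what refs historically called learns a home-team discount...
print("home-team coefficient (ref-call labels):", trained_on_calls.coef_[0][1])
# ...while the one trained on actual penalty data treats both teams alike (~0).
print("home-team coefficient (ground-truth labels):", trained_on_truth.coef_[0][1])
```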
Really well said and I agree with you entirely. I'd add that those limitations of LLMs are magnified when everyone defaults to generative AI. LLMs can do a lot more analytically when they aren't attached to a chatbot, eliminating hallucinations in addition to the equally large accuracy problem of false negatives.
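For what "analytical, no chatbot" use can look like, a minimal sketch - assuming the Hugging Face transformers library and its off-the-shelf facebook/bart-large-mnli model, with an invented play description: the model only scores a fixed label set, so there's no free-form text to hallucinate (the false-negative problem still has to be measured, of course).

```python
# Minimal sketch: a language model used as a scorer over fixed labels,
# not as a chatbot. Assumes `pip install transformers torch` and that the
# stock facebook/bart-large-mnli checkpoint can be downloaded.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

play = "Defender slides into the shooter's landing space after the release."
labels = ["defensive foul", "offensive foul", "legal play"]  # invented label set

result = classifier(play, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")   # ranked scores, not free-form prose
```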
College & NFL referees should be using augmented reality which should be linked to the video dashboard of the stadiums (and TVs).
AI should also be applied objectively to independent review.
I’ve also been working with and studying AI for decades (Computer Engineering background). We can both agree we’re in the nascent stages, and ChatGPT-5’s marginal improvements over -4 show the limitations of LLMs (there are many).