This really clarifies that AI optimizes the compression pathway the system uses to decide what matters. When the wrong variables are compressed early, downstream “fair” or “efficient” fixes can only amplify the error. Systems thinking is what keeps compression aligned with reality rather than dashboards.
Spot on. You’ve hit on the danger of early compression. If the initial data reduction is flawed, every efficient fix downstream only amplifies that original misalignment. Systems thinking is our best defense against building models that look great on a dashboard but fail in the real world.
Marcela, thank you for another wonderful post (and as ever for providing the references!). I see this lack of systems thinking as completely insidious in the culture of universities. No one is responsible and yet everyone is to blame. This is why AI adoption has been such a catastrophe. If only we could work outside of our silos and consider the system as a whole. 😢
You are right, Sam. Part of me thinks it’s because these things take time, and any administration or management team, whether in a university, a company, or the public sector, wants to show quick wins. Nobody wants to invest in something sustainable for the long haul.
Reading this reminded me of this YouTube channel, which focuses a lot on unintended consequences and second-order effects: https://www.youtube.com/watch?v=ImJSMqgyvCY&list=PLBuns9Evn1w9XhnH7vVh_7C65wJbaBECK
It seems very similar to what happens with AI and how it’s implemented in real systems.
Maybe it’s worth adding one more question when integrating AI: what could be the possible unintended consequences?
Great article, Marcela!
Thanks, Daria. I agree it’s important to add that question.
What could possibly go wrong?? 😆
My concern too: that we optimize AI to support the corrupted systems we currently live under. Rot on steroids.
Sadly you are right
Thank you for the insightful piece, and I think that too much AI use is problematic.
Thanks for reading, Erin.