
AI Ranked Content

By Brandon Shoop

While trying to navigate my day-to-day, I had a moderately disturbing thought: Is it dangerous to assume an AI naturally rewards true forward-looking content?

Generally, I am pretty bullish on the positive impacts that Artificial General Intelligence will have on society. I do have my reservations, mostly around how a tool teaches us to think in its own terms. Or, said differently, relying too heavily on AI to solve our problems narrows our ability to think beyond the solutions AI can propose.

How can AI engines reward forward-looking content? That feels contrary to their core nature of being trained on what has come before. Today's AI systems are trained primarily on historical data, yet they can still reward or generate forward-looking content. Why isn't this a contradiction, and what mechanisms are emerging to improve it?

Models are basically fancy pattern matchers. Given a historical trend, they can evaluate things like cause and effect: how markets respond to shocks, or how technologies tend to diffuse. This reads to me more like extrapolating generalizations than rewarding actually novel content. "Forward-looking" becomes "forecasting".
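Here's a toy sketch of what I mean (made-up numbers, not a real model): fit the historical trend, then project it forward. Whatever comes out is, by construction, a continuation of the pattern already in the data.

```python
# A minimal sketch with hypothetical adoption figures: "forward-looking" output
# from a pattern matcher is just the historical trend extended into the future.
import numpy as np

years = np.array([2015, 2016, 2017, 2018, 2019, 2020, 2021, 2022])
adoption = np.array([1.0, 1.4, 2.1, 3.0, 4.3, 6.2, 8.9, 12.8])  # made-up data

# Fit a simple exponential trend: log(adoption) ~ a * year + b
a, b = np.polyfit(years, np.log(adoption), deg=1)

future_years = np.array([2023, 2024, 2025])
forecast = np.exp(a * future_years + b)

for y, f in zip(future_years, forecast):
    # The projection is smooth by design; no step change can appear here.
    print(f"{y}: projected {f:.1f}")
```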

Models built on historical data must inherently have blind spots, especially for discontinuities, "black swans", and paradigm shifts. AI cannot predict events that have no historical analogues (e.g., Instagram, Bitcoin, or even the Transformer architecture). None of these would be predictable from trend curves or historical analogies. They were step changes, not linear extrapolations. Even when we "reward" AI for forward-looking content (e.g., through RLHF, reinforcement learning from human feedback, or prompts), that is not actual innovation. It is stylistic mimicry of how humans talk about the future.
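To make the mimicry point concrete, here's a toy illustration (not a real RLHF pipeline, and the phrase list is invented): a reward that scores "future-sounding" phrasing will happily prefer visionary-sounding text over accurate text.

```python
# Toy example: a style-based "reward" that counts forward-looking phrases.
# It rewards how the future is talked about, not whether the claim is true.
FORWARD_PHRASES = ["by 2030", "will transform", "the next decade", "poised to", "emerging"]

def style_reward(text: str) -> int:
    """Count forward-looking phrases; a proxy reward blind to factual accuracy."""
    lower = text.lower()
    return sum(phrase in lower for phrase in FORWARD_PHRASES)

grounded = "Adoption grew 40% last year, driven by lower hardware costs."
futuristic = "By 2030 this technology will transform every industry, poised to redefine the next decade."

print(style_reward(grounded))    # 0 -- accurate, but scores poorly
print(style_reward(futuristic))  # 4 -- sounds visionary, scores highly regardless of truth
```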

What does this mean for us? Are we confusing plausibility with prediction? Mistaking projection for foresight? Does AI produce seemingly forward-looking narratives, but not forward-looking truths?

Can we use AI safely for foresight, or are we barreling down a narrow path that limits how humans think?