

You can reason from a few principles:
- At its core, the math being optimized by these AI tools and their specialized hardware lets them perform inference and pattern recognition at huge scale across enormous data sets.
- Inferring a rule set for a pattern also allows generating new data that fits that pattern.
- Some portion of human cognitive work falls within the general framework of finding patterns or finding new data that fits an old pattern.
So when people start making claims about things with clear, objective definitions (a win condition in chess, the fastest route through a maze, the best lossless compression algorithm for real-world text), it’s reasonable to believe that the current AI infrastructure can lead to breakthroughs on that front. Image recognition, voice recognition, and things like that were largely solved a decade ago. Text generation with clear and simple definitions of good or bad (simple summaries, basic code that accomplishes a clearly defined goal) is what LLMs have been doing well.
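To make "clear, objective definition" concrete, take the maze case: any proposed answer can be verified mechanically. Here's a minimal sketch in Python (the grid encoding, the coordinates, and the `shortest_path` name are my own, purely illustrative) using breadth-first search, which returns a shortest path whenever one exists:

```python
from collections import deque

# Maze as a grid of strings: '#' is a wall, '.' is open floor.
MAZE = [
    "#.####",
    "#....#",
    "#.##.#",
    "#....#",
    "####.#",
]

def shortest_path(maze, start, goal):
    """Breadth-first search over the grid.

    The 'win condition' is objective: either the returned path moves one
    open cell at a time from start to goal, or it doesn't. BFS explores
    cells in order of distance, so the first path to reach the goal is
    guaranteed to be a shortest one.
    """
    rows, cols = len(maze), len(maze[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] == "." and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no path exists

print(shortest_path(MAZE, start=(0, 1), goal=(4, 4)))
```

The point is that a checker for the answer is trivial to write, which gives you exactly the kind of objective feedback signal these systems can be optimized against.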
On things that have much more fuzzy or even internally inconsistent definitions, the AI world gets much more controversial.
But I happen to believe that finding and exploiting bugs or security vulnerabilities falls more on the well-defined side, with well-defined successes and failures. So I take it seriously when people claim that AI tools are helpful for developing certain exploits.
That’s just an awkward sentence construction, but it makes sense: they released the tracks via BitTorrent, and the tracks were scraped from Spotify.
I sold my car that was purchased from a dealership via private party sale.
I charged my laptop that normally accepts 100W via a 20W phone charger.
I would’ve used a “which” clause set off with commas to avoid the confusion, but the sentence as written is valid and makes sense.