I remember Google+, and the idée fixe and mad hype around it. (Google was afraid of Facebook.) G+ was a ghost town. But for the first year or so the G+ team reported astronomical engagement numbers. Huh?
Finally we learned they were counting every G+ notification dropped at the top of Gmail as an "engagement."
Anyway, I wonder how they're measuring this...
https://www.businessinsider.com/google-earnings-q3-2024-new-code-created-by-ai-2024-10
@Mer__edith My engineering friends tell me that tools like Copilot are almost useless for new coding tasks, but very useful for generating unit and integration tests; they can do a reasonable job of creating entire test suites within a few prompting sessions. That makes sense, because those are very structured tasks.
So, if you think of testing as 25% of an engineer's work, I can see how this bit of folklore became an actionable metric as it filtered up to senior management.
@karabaic This looks like it may be biased, because many programming languages require far too much ceremony to define a unit test.
If the task is very structured, then it should be possible to abstract most of it away by improving libraries or language design.
AI will make such improvements seem unnecessary, so I expect libraries and APIs to go downhill.
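The ceremony point can be sketched in Python. This is just an illustration under my own assumptions (the `slugify` function is made up): a framework like pytest shows how far library design alone can shrink test boilerplate, compared with the class-and-method ceremony older frameworks require.

```python
import unittest


def slugify(title: str) -> str:
    """Hypothetical function under test (not from the thread)."""
    return title.strip().lower().replace(" ", "-")


# Ceremony-heavy style: a class, inheritance, and naming conventions
# just to state one equality check.
class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")


# Low-ceremony style: pytest collects any bare test_* function,
# so the same check is a single plain assert.
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"
```

Same check both times; the difference is how much structure the language and library absorb for you.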
@Mer__edith
@karabaic One more thing:
- more than 25% of our code is written by the IDE (automatic imports, refactoring, Unit Test templates, auto-completion from 3 letters to 20, …)
⇒ no one celebrates
- more than 25% of our code is written by a super-expensive statistical parrot
⇒ AI crowd cheers
If a task is structured, it would be much cheaper to create and share freely licensed, reliable templates and scripts that do it than to build a general answering machine that usually does the task.
@Mer__edith
@ArneBab @Mer__edith The technology with no use case wants a participation award.
@ArneBab This is one thing I never understand. IntelliJ automates most of my programming already.
I posit that "Chat, J'ai Pété" does a mediocre job of helping with programming tasks that have bad tooling.
@holothuroid and if you watch how IntelliJ does it — especially where it fails — you might get to the same conclusion as me, that IntelliJ mostly just uses templates and string-replacement.
Look at extract method when it fails: it's basically a template copied to a guessed location, with the visible inputs as template arguments.
Most of what it does would be pretty easy to replicate in Emacs, but JetBrains made their approaches so robust that they work 99% of the time.
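The "template copy plus string replacement" claim can be sketched in a few lines of Python. This is deliberately naive and entirely my own construction: real IDEs layer scope analysis on top, but the core transformation really is textual.

```python
# Naive "extract method" as template copy + string replacement.
METHOD_TEMPLATE = "def {name}({params}):\n{body}\n"


def extract_method(source: str, start: int, end: int,
                   name: str, params: list) -> str:
    """Move lines [start, end) of `source` into a new function and
    replace them with a call; visible inputs become parameters."""
    lines = source.splitlines()
    body = "\n".join("    " + ln.strip() for ln in lines[start:end])
    call = f"{name}({', '.join(params)})"
    new_method = METHOD_TEMPLATE.format(
        name=name, params=", ".join(params), body=body)
    remaining = lines[:start] + [call] + lines[end:]
    return new_method + "\n" + "\n".join(remaining)


before = "x = 1\ny = 2\nprint(x + y)"
after = extract_method(before, 2, 3, "show_sum", ["x", "y"])
print(after)
```

When the guessed location or the visible-inputs list is wrong, you get exactly the kind of failure you can watch IntelliJ produce.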
@karabaic @Mer__edith
@holothuroid It's not that this is cheap: text work, testing, making things robust, … are real work and cost money.
But they cost far less than the billions of dollars AI has already eaten.
JetBrains has 10 billion in yearly revenue, and it builds full IDEs and other tools for many languages.
https://en.wikipedia.org/wiki/JetBrains
That’s just a quarter of the yearly investment into AI startups (most of which are expected to fail):
https://news.crunchbase.com/ai/torrid-funding-pace-googl-xai-q3-2024/