I know we have a few developers/engineers in this community. I’m curious if anyone else uses Claude Code or similar AI coding agents? I’m a data/analytics engineer in ad tech and have been leaning into it heavily recently. I started using ‘auto mode’ a few days ago when it was released, and now I’m more sold on it than ever.
I’ve been reflecting lately on the identity shift I’ve had to make, from understanding data engineering and data science to understanding how to effectively manage AI agents… giving them loops to evaluate their own work, and the right guardrails to work as autonomously as possible without being able to cause real damage.
Vaguely actualism-related because I do think there was a period of pride holding me back from fully embracing that I needed to change how I approached things in a pretty fundamental way… and I started feeling much better once I made that change in attitude.
I’ve been using ChatGPT to code recently, after a one-year break. It has come a long way. Builds without errors (C#). I am not a programmer though; I wouldn’t even say amateur. It’s more that I know enough to read and understand the basics of OOP and syntax in general.
I do follow the financial woes of the “AI” world, though, and all discussions of identity aside, my opinion is to very much keep all your skills! No company, as of now, has worked out how to turn a profit on LLMs.
Unless we collectively start to subsidise the technology with tax dollars (which seems to be the current business plan of OpenAI), it’s about to get very expensive. Maybe in a few years, maybe tomorrow.
The fact is, it is not intelligent. It isn’t economically scalable. My brother just finished building data centres in the far north, and that is incredibly expensive.
The cost per user will need to triple for it to be viable. Or maybe even more.
Yeah, it definitely has come a long way. Leaning into the agent side and giving agents their own self-validation loops has been the big change for me lately: they can work for long periods of time, try things out, see the results, run tests, etc.
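To make the self-validation loop idea concrete, here’s a minimal Python sketch. The function names are made up, and the agent and test runner are stubbed out (a stand-in that "passes" on the second try), since the real versions would be LLM calls and an actual test suite:

```python
# Minimal sketch of an agent self-validation loop. propose_fix and
# run_tests are stubs standing in for an LLM call and a real test suite.

def propose_fix(task, feedback):
    # A real agent would generate code here, using feedback from the last run.
    return f"attempt for {task!r} (feedback: {feedback})"

def run_tests(attempt):
    # Pretend the first attempt fails and the second passes,
    # so the loop gets one round of feedback before succeeding.
    run_tests.calls += 1
    if run_tests.calls < 2:
        return False, "failure log"
    return True, "all green"
run_tests.calls = 0

def agent_loop(task, max_iters=5):
    feedback = None
    for _ in range(max_iters):
        attempt = propose_fix(task, feedback)
        passed, feedback = run_tests(attempt)
        if passed:
            return attempt  # validated result
    return None  # give up once the iteration budget is spent

result = agent_loop("fix the failing pipeline test")
```

The key design point is the `max_iters` budget: without it, an agent that can never satisfy its own checks will burn tokens forever.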
I pretty much agree with this. It’s a weird race they’re in, where as soon as one of them starts selling their product at its actual price (especially factoring in training costs), they would probably lose their customer base to the competition. But ultimately that’s more of an indictment of capitalism (and human nature) than of the technology, IMO.
This is where I disagree a bit. I think we know approximately what the real costs of running open-source models are, and they are very affordable and keep progressing in performance, though they stay behind the ‘frontier’ models for sure. I think both things will probably happen at the same time: those companies will have to start burning less money, and the technology will become more optimized and cheaper at the same level of performance.
I do think there are some ways in which it isn’t intelligent, no doubt about that. The environmental angle for one. The most unintelligent part of it to me right now is that we aren’t using it to work less, instead there’s pressure to work the same amount of time and produce much more, at least in my company. I also don’t like the acceleration of the ‘dead internet’ effect.
So I’m definitely not all in favor of the effects of LLMs on the world, but I’ll definitely push back on the idea that they aren’t an economically viable technology.
FWIW there are also open source models that are too big to run locally for 99% of people, but there are companies who host these models and sell tokens, and those are comparatively affordable and not subsidized to nearly the same degree as frontier models.
My prediction would be that we won’t see the massive models go away, but we also won’t see them get any bigger, at least not for a while. We will keep seeing development of better models at that massive size, and development of better models at the smaller size, but probably even more significantly will see better and better ‘agent harnesses’.
Harnesses are the interfaces the LLMs have with systems, plus their ability to do things like break down large tasks into persistent to-do list text files, jot down notes to themselves about things that caused them difficulty in the past, etc. If the LLM itself is the processing center of the brain, then the harness is like the senses, memory, and the body. I think that even if we never got an LLM smarter than the smartest one we have today, we would still see enormous improvement in what they are capable of by giving them more tools and more freedom to interface with and manipulate the world around them.
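The persistent to-do file idea is simpler than it sounds. Here’s a toy Python sketch (the file path and task names are made up, and a real harness would be fancier), showing how task state survives between agent sessions:

```python
# Toy sketch of a persistent to-do list: the agent breaks a big task into
# steps, writes them to disk, and checks items off across separate runs.
import json
import os
import tempfile

TODO_PATH = os.path.join(tempfile.gettempdir(), "agent_todo.json")

def save_todos(todos):
    with open(TODO_PATH, "w") as f:
        json.dump(todos, f)

def load_todos():
    if not os.path.exists(TODO_PATH):
        return []
    with open(TODO_PATH) as f:
        return json.load(f)

# First "session": break the task down and persist the plan.
save_todos([
    {"step": "profile slow query", "done": False},
    {"step": "add index", "done": False},
])

# Later "session": reload the plan, complete a step, persist again.
todos = load_todos()
todos[0]["done"] = True
save_todos(todos)
```

Because the state lives in a plain file rather than the model’s context window, the agent can pick up exactly where it left off even after its context is cleared.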
Claude Code recently got the ability to ‘dream’, where it looks through its memory files (text files with the memories it wrote to itself) and consolidates them into a more compact and useful record to reference in the future. Stuff like that is a lot cheaper to develop than training a new LLM.
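To illustrate the consolidation idea (this is not Claude Code’s actual mechanism, just a toy Python sketch with made-up notes), the core of it is collapsing redundant memories while keeping the latest wording:

```python
# Toy sketch of memory consolidation: collapse duplicate notes,
# keeping the most recent wording of each. In a real "dream" pass an
# LLM would do this merging semantically, not by string matching.

def consolidate(notes):
    seen = {}
    for note in notes:
        key = note.lower().strip()  # crude stand-in for semantic similarity
        seen[key] = note            # later duplicates overwrite earlier ones
    return list(seen.values())

raw_notes = [
    "The deploy script needs VPN access.",
    "Tests must run with PYTHONPATH=src.",
    "the deploy script needs VPN access.",
]
compact = consolidate(raw_notes)  # two notes survive instead of three
```

The payoff is the same as in the real feature: a shorter memory file costs fewer tokens every time the agent reads it back in.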
It’s all pretty fascinating to me, but I think I need to keep in mind that it probably won’t do a lot to make the world a better place.
This is fascinating! I haven’t had an unbiased “insider” opinion on this topic, and it’s really interesting.
The elephant in the room is of course your last line!
In my opinion, it’s a fundamentally excellent technology with obvious long-term uses in a world of aging population. I was looking at videos of Boston Dynamics’ new Atlas robot, and the parallel is obvious.
This question will define the century. Will humanity be better off with AI and robots in every workplace and throughout government, medicine, the military, etc?