Discussion about this post

Neural Foundry

That productivity scaling paper is interesting, but the 8% annual improvement feels almost too conservative given what's happening with model capabilities right now. I ran some tests internally comparing GPT-4 vs Claude 3.5 on analyst tasks, and the difference in speed was more like 25-30%, depending on context-switching overhead. The bigger question is whether those productivity gains actually translate to output quality or just faster mediocrity, which is the thing most studies don't know how to measure yet.
