They don’t really understand how these models work, and get misled by the fact that AI can arrive at a correct solution (or a correct-looking one).
Like, I was in a meeting where people were presenting their Claude skills (which are just text files describing processes that get added to the context), and one manager mentioned doing regression testing on added skills to make sure they don’t break the functionality of existing ones. From my POV, he was on the right track but also missing the point entirely, because skills won’t consistently pass regression tests even without new skills added. Something being in the context window only has a *chance* of affecting the output. And if the code being modified has comments that look like instructions, those might override the actual instructions.
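For anyone who hasn’t seen one: a skill really is just a text/markdown file of instructions. A minimal hypothetical example (every name here is invented for illustration, not from any real skill) might look like:

```markdown
---
name: add-derived-test
description: Extend an existing test by subclassing it with extra parameters
---

When asked to extend a test:
1. Create a new test class that inherits from the target test class.
2. Add the required parameters (e.g. retries, timeout) as class attributes.
3. Do not omit any of the listed parameters, even if they seem unused.
```

The whole thing is plain text dropped into the context, which is exactly why it only has a *chance* of being followed.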
Or it might try solving non-existent problems for you. A skill I was “developing” for making a particular modification to tests basically just said outright: “make a test that inherits from the target test and add these parameters.” Dead simple step. The first test I tried it on, I saw it was missing one of the arguments. When I mentioned it, the AI said that because the test’s name started with “<name of section>” and the test didn’t target that section, it had decided the argument wasn’t necessary. So I had to add instructions not just to add that argument, but to not decide to leave it out for arbitrary reasons.
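For concreteness, the step the skill described is trivial in code. A sketch of what the output *should* look like (class and attribute names are made up, since the real ones aren’t in this post):

```python
import unittest


class TargetTest(unittest.TestCase):
    # Stand-in for the existing test the skill targets.
    retries = 1
    timeout = 5

    def test_runs(self):
        self.assertGreater(self.timeout, 0)


class DerivedTest(TargetTest):
    # The skill's instruction: inherit from the target test and add
    # these parameters. Both must be present; the failure mode in the
    # anecdote was the model silently dropping one of them.
    retries = 3
    timeout = 30
```

The point being: when the step is this mechanical, there’s no judgment call for the model to make, yet it still invented a reason to skip an argument.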
I can’t say for sure that any of the AI tasks I’ve done actually saved time by being AI-assisted. But the mental load is lower and they really want us using AI, so I’ll keep doing it. The unreliability is going to cause more problems than it solves in the long run, IMO.