So many engineering teams are still missing the basics when it comes to releasing these chatbots into the wild:
1. Evaluations before release
2. Monitoring after release
3. LLM security (defenses against prompt injection)
4. AI insurance, so your company is still protected in case something goes wrong
@gaborsoter Is insurance really the answer here? How do companies currently protect against a rogue employee who just says random things? Pretty sure it’s not insurance, it’s just retracting the statement after the fact.