What keeps you awake when thinking about AI for work?

AI tools and systems are popping up everywhere. If you’re just starting to try them out, or have run a small pilot, what would help you feel confident about adopting them fully?

It feels a bit like the move from local servers to the cloud 15 years ago. Back then the question was ‘Should we?’, but over time it became ‘When will we?’ I think AI is making the same shift from ‘Should we?’ to ‘When?’

This question has a lot of angles, both technical and business. AI isn’t magic; it should support your core tools or processes. Use it where it truly fits and actually helps.

If a problem can’t be solved well with regular programming but AI can improve it, then it makes sense to use it. But if you’re adding AI just because it’s trendy, it will feel wrong and forced.

With enough knowledge of your field and of engineering, it should be clear where AI fits. Large language models, for example, are strong at specific tasks like summarizing or drafting text, not at everything. Focus on your main goals and only use AI where it adds real value.

The biggest frustration is not being able to use these tools because our security team keeps saying no. We can’t integrate them with our systems or use them the way they’re meant to be used. By the time we get approval, something newer and better is already out. It’s like we’re always behind.

@Ori
What are the main reasons security says no?

Hart said:
@Ori
What are the main reasons security says no?

Anything involving a work computer, uploading data, or downloading data needs tons of approvals. Security also has to vet the company providing the tool: its security standards, how it handles data, and whether it’s trustworthy.

Our security team doesn’t want data leaving our network or being used to train models. It’s even worse if the data could be shared.

Then there’s dealing with contracts, service agreements, and all that. It’s already a headache to get normal tools approved, so something as complex as AI tools becomes a massive challenge.

What about all the people whose jobs will be lost because of AI?

Just kidding. Companies will just say ‘It’s all for efficiency.’

The fact that we still can’t fully explain how some AI decisions are made. It’s hard to trust something we don’t fully understand.