Big tech is coming for your data, now even more than before

Last week Adobe updated their terms of service. In short: the new terms gave Adobe a license to users' content, to use it for a wide range of purposes mainly benefiting Adobe. I'm not going to reflect on this here. If you want to know more about it, read the discussion I saw on X.

And last week it wasn't only Adobe. Meta announced that they will use your images to train their AI (and then paused it in the EU after some commotion). Apple announced a partnership with ChatGPT to integrate it with Siri. All of them accessing your data. Not small amounts of it, but all of it.

Here’s the thing… The time when you were the product only while using free stuff lies far behind us. These days you pay and you’re still the product. Every company wants your data. Yes, there is the GDPR, but it won’t help you that much.

This mining and use of your data (read also: violating your rights to privacy and such) is not new, but with the rise of AI it’s accelerating big time. And that is worrisome.

Why? Well, for starters, lots of people place misguided trust in data, whether from lack of knowledge, lack of experience, or both.

From my professional perspective I ‘should’ applaud it. After all, this way we supposedly get to know the customer even better and can offer better-tailored products and services.

But this is only partly true. The foundation underneath it all should always be social human interaction. And I see this aspect of life disappearing rapidly. We are not becoming better people.

We increasingly rely on AI, data and algorithms. We replace humans with machines. But accountability and empathy lag far behind.

Algorithms can be, and often are, flawed. Rubbish in = rubbish out. And data stripped of its relevant context can lead to disastrous conclusions and result in inhumane actions. Take, for example, the Dutch childcare benefits scandal.

(please check the link to read the story).

But hey, if the computer says so… you may end up taking your own life. And who cares enough to take responsibility and be held accountable?


Users of AI trust its output way too often. Long term, what does this mean for society? What happens when AI produces unreliable outcomes but people mistake them for valid information, in this day and age where fake news is already an issue?

Sure, AI can do some really cool, fun stuff too. But that’s always the carrot we’re chasing. It’s time to choose how you want to handle your data and who gets to have it. Read the terms of an app for a change, and don’t just install everything on your phone or laptop because it’s fun or handy.