Google Opens Patches For "METRICFS" That They Have Used Since 2012 For Telemetry Data


  • #21
    Originally posted by schmidtbag View Post
    Misread by what, who, and how? I only asked for 1 scenario - shouldn't be that hard to come up with something more specific.
Anybody that Google sells your info to. Could be an insurance agency, could be the police. Hard to come up with specifics since it's all done behind closed doors, due to all the nefarious potential.

    Yes, I get that errors can happen, but what of Google's services are going to cause me a significant issue? Worst-case scenario, I get flagged for something I didn't do on Youtube.
What if someone is looking at child porn and a bug in Google's database software puts all the child-porn results, clicks, metrics, whatever into your profile?

Objectively, a computer is, without a doubt, more trustworthy than a person. A computer has no incentives. A computer has an extremely slim chance of making an irreproducible mistake. Yes, software is only as reliable/trustworthy as the programmers made it, but no matter how flawed the programmers are, they can make the software better than themselves.
    I'm not aware of any AI of Google's that I depend on.
    A computer is only as trustworthy as the person who set the computer up and the people who wrote the code. But we're on Linux so we know that.

    Credit card transaction histories don't tell you what you bought, they just tell you what store you bought from, how much, and when. But for argument's sake, let's say it did track what you bought (or, let's say the store itself was giving away your purchase info). Ever heard of using cash? You might then argue "what if it's an online purchase?" but if you're really that concerned about someone tracking your purchase history, buy a gift card, create another account, and then ship it somewhere that's not your home. Personally, I don't bother with any of that because I don't care if someone sees my purchase history either. I don't buy things I'm ashamed or afraid to have other people know about. That doesn't mean I am willingly going to give away all my info, but if someone starts collecting it, well, I guess they're going to be pretty bored.
You can't use most gift cards for online purchases if you're that paranoid about being tracked. You'll be flagged by the activation timestamp, they'll know what store it happened in, and they'll have your picture from the store's closed-circuit camera system. Any competent investigator can figure that out, and likely won't even need a warrant, since most stores and credit card companies willingly hand over any and all information.
    Last edited by skeevy420; 06 August 2020, 07:33 PM.



    • #22
      Originally posted by skeevy420 View Post
Anybody that Google sells your info to. Could be an insurance agency, could be the police. Hard to come up with specifics since it's all done behind closed doors, due to all the nefarious potential.
      But again: they don't have anything to sell that would cause such things. I'm no saint but it's really not hard to live an enjoyable life without crime. If I'm accused of something I didn't do, that's something I can sue over.
What if someone is looking at child porn and a bug in Google's database software puts all the child-porn results, clicks, metrics, whatever into your profile?
First of all, what is the probability of that ever happening? It's not a 0% chance, but I might as well never use the internet again if that's something I have reason to fear. But even if it did happen, millions of people do terrible or illegal things online every day; a single instance of me being wrongly documented as a pedophile probably isn't going to trigger anything. If it does, police will probably raid my house, computer, and ISP logs, only to find that, no, I actually didn't search for such things.
      Should something like an insurance agency deny me because they bought this info, fine - I don't want to work with a company that deliberately finds excuses to discriminate.
      A computer is only as trustworthy as the person who set the computer up and the people who wrote the code. But we're on Linux so we know that.
Yes, and I for one do trust what Google is doing with the data they collect, because I don't fantasize that my life is important enough that an evil corporation gets any satisfaction out of ruining it. This is no different from people looking at a robot from Boston Dynamics and saying "so this is the beginning of Skynet" with full sincerity (it's not; the robot uprising isn't going to happen).
      The only thing Google wants is money. They won't make money if they sell data so incriminating that it ruins people's lives, because then people will fear them. This is what happened with Facebook - they've been losing users because people are getting suspicious. Like a parasite, they can't suck all of the blood out of the host or else they kill the host. They have to allow the host to remain "healthy" enough so they both benefit.
You can't use most gift cards for online purchases if you're that paranoid about being tracked. You'll be flagged by the activation timestamp, they'll know what store it happened in, and they'll have your picture from the store's closed-circuit camera system. Any competent investigator can figure that out, and likely won't even need a warrant, since most stores and credit card companies willingly hand over any and all information.
      Fair points - shows how little I think about such things, because I don't have a reason to worry about them.

To me, living a life in fear is not a life worth living. That doesn't mean being irresponsible (like I said, all the important info about me is stored on my personal server), but I just can't see why people care about what their Google searches will do to their lives, as though someone is actively watching what you are doing and toying with your life simply because they like to watch the world burn. There's enough real crap I have to deal with in life. Until someone actually proves to me that Google is ruining people's lives regularly, it is not worth my concern.



      • #23
        Originally posted by schmidtbag View Post
        Misread by what, who, and how? I only asked for 1 scenario - shouldn't be that hard to come up with something more specific.
        I already provided examples of data-analysis systems failing.

        Objectively, a computer is, without a doubt, more trustworthy than a person. A computer has no incentives. A computer has an extremely slim chance of making an irreproducible mistake.
        AHAHAHAAHAHAHAHAHAHAHAAHAHAHAHAHAHAHAHAHAH you are a fool.

A trustworthy system is a safety-certified system controlling vehicles and aircraft, or a PLC controller used in industrial automation (also certified for reliability to various levels). To get such certification, the software and hardware have to follow specific rules and must be inspected and tested by third parties that then certify it and put their ass on the line.

On PC there is no such certification or guarantee; if the software or hardware fails, no one gives a shit, at most you get replacement hardware. And the complexity of most software is staggering.
Not even MS or the software vendor knows about some kinds of errors or failures, and even if you have a PAID support contract, in some cases the best answer they can give you is to work around it.

        Credit card transaction histories don't tell you what you bought, they just tell you what store you bought from, how much, and when.
        and they can be cross-referenced with other data to figure out what you bought or your location or patterns.
        Did you ever wonder why Google, Facebook and others have trackers on EVERYTHING? Because they want data they can cross-reference to actually get useful info out of it.

        Personally, I don't bother with any of that because I don't care if someone sees my purchase history either. I don't buy things I'm ashamed or afraid to have other people know about. That doesn't mean I am willingly going to give away all my info, but if someone starts collecting it, well, I guess they're going to be pretty bored.
Again, the issue isn't whether they know what boring stuff you do in your boring life, but what can happen if a sketchy data-aggregation system makes a mistake and you get flagged as a "potential terrorist" or end up on some blacklist. Or some criminal decides you are a worthy target (as already happens in Latin America and Asia, where criminals are using data mining).

        Uh... yes it has. There have been multiple occasions of lawsuits because of this.
        And they ended in what way? Usually it's a slap on the wrist, where they pay like 1/10th of what they made by abusing such information.



        • #24
          Originally posted by starshipeleven View Post
          I already provided examples of data-analysis systems failing.
But you didn't provide examples of how Google's data collection failing would affect me. There's a difference between being marked as a terrorist and a search for cat videos being recorded as a search for bat videos.
          AHAHAHAAHAHAHAHAHAHAHAAHAHAHAHAHAHAHAHAHAH you are a fool.
Usually, the ones who respond the most obnoxiously are the ones who have the hardest time defending themselves.
A trustworthy system is a safety-certified system controlling vehicles and aircraft, or a PLC controller used in industrial automation (also certified for reliability to various levels). To get such certification, the software and hardware have to follow specific rules and must be inspected and tested by third parties that then certify it and put their ass on the line.
And you think that doesn't apply to one of the largest corporations in the world? Just like Boeing can't afford an aircraft crashing because of a mistake like that (seriously, their recent glitches are badly hurting their reputation), Google can't afford sloppy code that handles data incorrectly, because their entire business model depends on it.
On PC there is no such certification or guarantee; if the software or hardware fails, no one gives a shit, at most you get replacement hardware. And the complexity of most software is staggering.
Not even MS or the software vendor knows about some kinds of errors or failures, and even if you have a PAID support contract, in some cases the best answer they can give you is to work around it.
          Yes, just like there's no guarantee I won't get struck by lightning in my lifetime, or that I'll win the lottery. The probability of a life-changing error happening in this context is negligible. It's stupid to have to argue this. Worrying about statistical anomalies is a waste of time.
          Did you ever wonder why Google, Facebook and others have trackers on EVERYTHING? Because they want data they can cross-reference to actually get useful info out of it.
Yes, and that tidbit works against your point: the more data you have, the more accurate and reliable it is. So in the event of a software/computational error that results in you being flagged for something, there is so much tracked activity of everything you do and everywhere you go that the error is statistically insignificant and can be dismissed. It will only raise attention when there is a pattern to follow. So, ironically, the less data you give them, the more likely an error will be a threat (unless of course you give them no data at all, but good luck with that).
          In the AI example you gave, it was bad at identifying black people because there wasn't enough data, and therefore it made mistakes.
Again, the issue isn't whether they know what boring stuff you do in your boring life, but what can happen if a sketchy data-aggregation system makes a mistake and you get flagged as a "potential terrorist" or end up on some blacklist. Or some criminal decides you are a worthy target (as already happens in Latin America and Asia, where criminals are using data mining).
I highly doubt that today there isn't some form of integrity checking before taking action on someone's life over an alarming data point here and there. Google is known for making pretty effective algorithms, so it's good at figuring out patterns. A single misfit data point in a sea of millions is not going to ruin your life. If you really think you're special enough to believe otherwise, go ahead; I'm not interested in changing your opinion. I'm fine with taking my chances. I'm sure you could perform one search for something explicitly very illegal and the FBI isn't going to come knocking on your door as a result. You performing that search isn't an error, but the system has to treat it like one, because otherwise the number of false positives for active threats would be overwhelming. You'd likely just be flagged as suspicious until further activity happens.
          Suppose a criminal does collect my search data: then what? What are they going to do with it? I'm sure the data mining you're referring to is companies that sell personal information like government IDs, credit card numbers, and addresses.
          And they ended in what way? Usually it's a slap on the wrist, where they pay like 1/10th of what they made by abusing such information.
          No, actually. In the case of Cambridge Analytica, they're now defunct.



          • #25
            Originally posted by schmidtbag View Post
            And you think that doesn't apply to one of the largest corporations in the world?
They can fail much more without getting hit with huge costs, and in many cases they don't need 100% reliability, as that's much more expensive than just paying off some lawsuits every now and then. (This is also why things like cars and houses aren't built like tanks and bunkers and any damage is covered by insurance: it's cheaper to pay for insurance than for a tank-car or a bunker-house.)

A mistake in aircraft control software (or other design flaws) means all aircraft of the same model are grounded until it's fixed and the builder is paying damages to the companies that can't fly their planes. This is not cheap, and airlines are not "poor customers"; they have lawyers and will join together to beat the builder down if it doesn't pay.

It happened a couple of times this year or last. Boeing, I think.

            A mistake in a Siemens safety PLC (the device, not the software written by the system integrators that designed and built the plant) that blows up some high end factory and/or kills people will make them responsible, and they will have to pay for damages.

            Yes, just like there's no guarantee I won't get struck by lightning in my lifetime, or that I'll win the lottery.
Those are the chances of a safety-certified system malfunctioning (it's closer to winning the lottery than getting struck by lightning). How many times has aircraft electronics malfunctioned so badly that it caused a crash? How many times has a PC malfunctioned so badly that it crashed or kernel panicked or BSODed? I can reliably trigger kernel panics or BSODs on all modern OSes, so it's not like it's hard.

And that's the OS, which was ALLEGEDLY made by professionals. The actual software running on it is even funnier. Stuff coded by underpaid college grads isn't anywhere near those low failure rates. Most open-source stuff is right out the fucking window, as it waives any responsibility in the very license.

Which is why vehicle control and industrial control systems are NOT running on PC hardware with PC operating systems; they use some kind of powerful microcontroller running VxWorks or another OS certified to the right level of reliability/safety.

The data collection and analysis, or the media center, is run by a PC-like system, but the actual process is controlled by a reliability-certified or safety-certified controller.

            Yes, and that tidbit works against your point: the more data you have, the more accurate and reliable it is.
            That is a good reason to not give them enough.

            So ironically, the less data you give them, the more likely an error will be a threat
            With so much less data there is much less certainty of any answer, so it's more likely to be discarded as "unsure" or "incomplete". You even Big Data, bro?

            In the AI example you gave, it was bad at identifying black people because there wasn't enough data, and therefore it made mistakes.
            It had the same amount of data for both white and black people. The "data" is the image it needs to recognize. The photos were of the same size and quality. The only difference was the subject.

The mistakes were due to how the AI was trained, and that's human error, just as a bug is human error in non-AI software. AIs don't choose their training data; their human masters do. Creating an AI is part programming the core and part training. Screw up either and it's garbage.

            I highly doubt that today there isn't some form of integrity checking before going ahead and taking action on someone's life because of an alarming data point here and there. Google is known for making pretty effective algorithms, so it's good at figuring out patterns.
            No. Please stop with your ignorance.

YouTube is one of the biggest examples of algorithms doing stuff unchecked: copyright strikes for 5 seconds of "copyrighted material" are common, and channels are outright banned without explanation.

The algorithm checks whether the video contains certain keywords and will demonetize, delist, "not recommend", or "forget to notify subscribed users". This happened with the coronavirus: anyone saying "coronavirus" or just "corona" got insta-demonetized, and everyone had to start calling it "human malware" or other names so that the AI wasn't triggered.

This also caused shitstorms in the past for other reasons, like "videos aimed at children" because of US child-exploitation laws or something; if the AI decides that your videos are "aimed at children", you are demonetized and fucked.

Plus all the crap that goes on when a channel is compromised and stolen. It literally takes a couple of weeks of combined effort from all the fans, and other channels in the same category also ask their fans to pummel YouTube with reports and messages to get Google's attention and "save their friend".

This is Google, so they don't give two shits about anyone. Even large youtubers with multiple millions of subscribers, who are legitimate businesses with multiple employees creating media, are treated like complete shit and left in the dark. They can appeal copyright strikes, but appeals are mostly arbitrated by machines that don't really listen, and even those that have a YouTube "channel manager" (an actual human) can't get him to do much about any of this.

            Why? Because even big youtubers aren't worth shit for Google. They can lose thousands and nothing will change.

This is completely insane China-grade shit that is going on, from Google, and you have the audacity to say "oh, I highly doubt it". No, motherfucker, you don't know shit about how businesses deal with people and use data.

            Suppose a criminal does collect my search data: then what? What are they going to do with it? I'm sure the data mining you're referring to is companies that sell personal information like government IDs, credit card numbers, and addresses.
I'm referring to the fact that I don't know how much of my online activity they actually track and log, and I don't see why I should trust them to "not track me too much" or "store my data correctly", so I block all analytics and tracking regardless.

Google is doing SO MUCH MORE than just collecting search data; practically every site has a "Google Analytics" service running.

            DuckDuckGo lives only off collecting anonymous search data (and showing ads in their search results) and it's a tiny company.

            No, actually. In the case of Cambridge Analytica, they're now defunct.
You mean "no one went to jail, the company was bankrupted to avoid paying shit, and now everyone that matters (like the CEO and CTO) are CEOs and CTOs of another company"? Because that's what happened. The "new" company is called Emerdata, and it is still doing the exact same job, FYI.

            Is this the "justice" you are talking about? The "justice" that will protect you in case something goes wrong?
            Last edited by starshipeleven; 07 August 2020, 03:22 PM.



            • #26
              Originally posted by starshipeleven View Post
They can fail much more without getting hit with huge costs, and in many cases they don't need 100% reliability, as that's much more expensive than just paying off some lawsuits every now and then. (This is also why things like cars and houses aren't built like tanks and bunkers and any damage is covered by insurance: it's cheaper to pay for insurance than for a tank-car or a bunker-house.)
              Just as they don't have huge costs, they don't have huge revenue per-"product" either.
A mistake in aircraft control software (or other design flaws) means all aircraft of the same model are grounded until it's fixed and the builder is paying damages to the companies that can't fly their planes. This is not cheap, and airlines are not "poor customers"; they have lawyers and will join together to beat the builder down if it doesn't pay.
              A human-made mistake in data collection means all data is compromised. This is not cheap either. Google can't afford such a mistake. They can afford machine-caused mistakes (like from a flipped bit or whatever).
Those are the chances of a safety-certified system malfunctioning (it's closer to winning the lottery than getting struck by lightning). How many times has aircraft electronics malfunctioned so badly that it caused a crash? How many times has a PC malfunctioned so badly that it crashed or kernel panicked or BSODed? I can reliably trigger kernel panics or BSODs on all modern OSes, so it's not like it's hard.
              I completely agree. Meanwhile, an issue with Google's data is not only less threatening but statistically insignificant. Then, you have to consider the incredibly slim chance that if there's an issue, it's actually a threat. THEN you have to consider the unlikely chance that the threat will actually escalate to a real-world action. So if I can hop on an aircraft and probably not die, why the hell should I have the slightest concern that an error in Google's servers will ruin my life, especially considering I don't do anything incriminating? It's utterly absurd.
              That is a good reason to not give them enough.
              Oh, yeah, you're right!
              /s
              With so much less data there is much less certainty of any answer, so it's more likely to be discarded as "unsure" or "incomplete". You even Big Data, bro?
              That situation is only a problem with a few hundred, maybe a few thousand data points. Most people have a lot more than that. It ain't "Big Data" if there isn't a big amount of data. It's not like you as a user can delete what they collect.
The mistakes were due to how the AI was trained, and that's human error, just as a bug is human error in non-AI software. AIs don't choose their training data; their human masters do. Creating an AI is part programming the core and part training. Screw up either and it's garbage.
              Yes... because it wasn't collecting enough data in the manner it should have, hence my point. More clean data would make it more reliable.
              No. Please stop with your ignorance.
              Not until you take off your tinfoil hat.
YouTube is one of the biggest examples of algorithms doing stuff unchecked: copyright strikes for 5 seconds of "copyrighted material" are common, and channels are outright banned without explanation.
It's hyper-sensitive, but it's doing what it was programmed to do. It's not making a mistake; it's just pushy. Keep in mind it wasn't always this bad: advertising and media companies are the ones that told Google to crack down on this, because they didn't want their product "tarnished" or stolen.
              As a consumer of YT, there's hardly any danger.
              What would be alarming is if the AI starts flagging things that have no reason whatsoever to be flagged, and so far, I've never heard of that happening.
              Plus all the crap that goes on when a channel is compromised and stolen. It literally takes a couple weeks of combined efforts of all fans, and other channels in the same category also ask their fans to pummel Youtube with reports and messages to get Google's attention and "save their friend".
That's usually because of human error (like a crappy password), so it's not relevant to the discussion.
This is Google, so they don't give two shits about anyone. Even large youtubers with multiple millions of subscribers, who are legitimate businesses with multiple employees creating media, are treated like complete shit and left in the dark. They can appeal copyright strikes, but appeals are mostly arbitrated by machines that don't really listen, and even those that have a YouTube "channel manager" (an actual human) can't get him to do much about any of this.
              EXACTLY!!!! They don't care about you or anyone; they just want their money. In the case of YT, that means bowing down to the whims of advertisers and media companies. The average youtuber is a thorn in their side, because having too much freedom on the platform takes away from Google's revenue. That's why Google puts so much time and money into their "Youtube Originals", because those people don't cause them financial headaches.
This is completely insane China-grade shit that is going on, from Google, and you have the audacity to say "oh, I highly doubt it". No, motherfucker, you don't know shit about how businesses deal with people and use data.
No, it really isn't like that, and it's kind of pathetic that you actually think it's comparable. Wake up and smell the roses. The world is a shitty place, but I'm not buying into your conspiracies.
              I'm referring to the fact that I don't know how much they actually track and log of my activity online and I don't see why I should trust them to "not track me too much" or "store my data correctly" so I block all analytics and tracking regardless.
Damn, you are so paranoid... Even if they track and sell every single data point you provide them, it only matters if YOU give them something to work with.
              Google is doing SO MUCH MORE than just collecting search data, all sites have a "google analytics" service running.
I'm well aware; I've set up GA accounts for several websites. But Google doesn't get that much data from it. They don't know who you are; they just know where on a site someone is browsing, where that person is from, how long they spend on pages, and so on. They don't collect account info unless the site is tied in with Google to do so. It's practically a wholly separate system.
              DuckDuckGo lives only off collecting anonymous search data (and showing ads in their search results) and it's a tiny company.
              Yup, and water is wet.
              You mean "none went to jail, the company was bankrupted to avoid paying shit and now everyone (that matters like the CEO and CTO) are CEOs and CTOs of another company"? Because that's what happened. The "new" company is called Emerdata, FYI.
              It put an end to what they were doing. It's unlikely Emerdata will make the same mistake.
              Last edited by schmidtbag; 07 August 2020, 05:19 PM.

