I’m asking for public policy ideas here. A lot of countries are enacting age verification now. But of course this is a privacy nightmare and is ripe for abuse. At the same time though, I also understand why people are concerned with how kids are using social media. These products are designed to be addictive and are known to cause body image issues and so forth. So what’s the middle ground? How can we protect kids from the harms of social media in a way that respects everyone’s privacy?

  • Æ@piefed.social · 2 days ago

    I think we should reframe the question.

    How can we protect adults from the harms of not being able to anonymously post meaningless bullshit to online strangers we never agree with, without sacrificing every child’s mental stability?

    Maybe put children’s rights before adult rights. Adults had fun and got along fine without social media before the 2000s. I refuse to believe we are no longer capable of that. Especially if it means kids get to go back to using the internet as a resource for homework, playing outside, and using their own imaginations. Adults too.

  • shaggyb@lemmy.world · 4 days ago

    Stop. Giving. Them. Phones.

    Stop whining. No they don’t need one. NO THEY DON’T.

    No.

    No they’re not special.

    No they’re not too busy. Neither are you.

    No iPad either.

    Stop. Shut up. No. Phones.

    • hector@lemmy.today · 23 hours ago

      And/or old-school phones that can call and text but can’t surf the internet. Small flip phones. Parents will want to be able to communicate, because in many cases they are worriers, but there is no need for a smartphone for that.

    • ErevanDB@lemmy.zip · 4 days ago

      I agree, if you limit “phones” to “smartphones and portable computers”. There are reasons to give a kid a small, no-internet dumbphone. But yes, don’t give kids unrestricted access to the family PC, and DEFINITELY don’t give them their own.

    • YeahIgotskills2@lemmy.world · 4 days ago

      That’s the tack I’m taking. My eldest goes to high school next year and most of his peers are automatically getting a smartphone at that point. He’ll be 13. He can forget it. A dumb phone at a push, for safety. That’s it.

  • GreenKnight23@lemmy.world · 4 days ago

    ban social media metrics and information trading/markets. make it a truly anonymous service like it was in the early 2000s.

    if protecting children was the point they would stop corporations from identifying all users and selling their identities/profiles online.

    but, protecting the children is NOT the point. the point is control of freedom of speech, or rather who gets to have the freedom of speech.

  • FlyingSpaceCow@lemmy.ca · 4 days ago

    Governments need to set up a digital ID using a trustless authenticator.

    Government issues a one-time verified credential (tied to real identity verification, like a passport or SSN check). You get a cryptographic token on your device. When a platform needs to know “is this a real adult citizen?”, you present a zero-knowledge proof — yes/no, nothing else. No name, no IP, no persistent identifier the platform can track. The government isn’t contacted. The platform learns nothing except the answer to their question.
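    The flow described above can be made concrete with a toy sketch. This is a plain signed attestation, not a true zero-knowledge proof, and every name in it is made up for illustration; real proposals use unlinkable credential schemes (e.g. BBS+) so a token can’t be correlated across sites.

```python
import hashlib
import secrets

# Well-known 1024-bit prime (RFC 2409, Second Oakley Group); toy-sized here.
# A real deployment would use a standardized credential scheme instead.
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74"
    "020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F1437"
    "4FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
    "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE65381FFFFFFFFFFFFFFFF",
    16,
)
G = 2

def _h(*parts: bytes) -> int:
    """Hash transcript parts to an integer challenge."""
    return int.from_bytes(hashlib.sha256(b"|".join(parts)).digest(), "big")

# Government side: after a one-time real-identity check, it signs a minimal
# claim. No name, address, or birth date enters the token.
def gov_keygen():
    x = secrets.randbelow(2**256)              # private signing key
    return x, pow(G, x, P)                     # (private, public)

def gov_issue(priv: int, claim: bytes):
    """Schnorr-style signature over a claim such as b'over18=true'."""
    k = secrets.randbelow(2**256)
    r = pow(G, k, P)
    e = _h(r.to_bytes(128, "big"), claim)
    s = k + priv * e                           # response binds key to claim
    return (r, s)

# Platform side: verifies offline against the government's public key.
# It learns the yes/no claim and nothing else; the government is not contacted.
def platform_verify(pub: int, claim: bytes, sig) -> bool:
    r, s = sig
    e = _h(r.to_bytes(128, "big"), claim)
    return pow(G, s, P) == (r * pow(pub, e, P)) % P
```

    Note what the sketch leaves out: presenting the same token twice is trivially linkable, which is exactly the gap zero-knowledge credentials close by producing a fresh, unlinkable proof per site.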

  • Kissaki@feddit.org · 4 days ago

    The German passport lets services verify age by NFC-reading your passport with your phone, with validity confirmed through an intermediary state service. All the service sees is a confirmation that the age requirement is met. No name, no age, no address, no face.

    Some other countries have similar systems. There’s already an EU directive to implement this at a broader European level.

    • ageedizzle@piefed.caOP · 4 days ago

      This sounds like a much better strategy than the Australian model of simply scanning your face and using AI to guess your age

    • PeriodicallyPedantic@lemmy.ca · 4 days ago

      How would that work online? How would they confirm it’s your passport, and that it’s a real passport that was really scanned (instead of a browser plugin)?

      • Kissaki@feddit.org · 2 days ago
        1. Register as a service, with justification for why you need to read the fields or properties you say you need
        2. Upon acceptance, acquire a digital permission certificate
        3. Set up a server that handles communication with the ID
        4. For a request, prove you own the permission cert by answering a challenge sent by the ID document
        5. The ID document proves to the server, via a challenge, that it is what it claims to be (a whole batch of ID documents shares the same private and public keys, so they are not personally identifiable or associable with an individual)
        6. The user enters their PIN so the process can proceed
        7. Open a secured connection between the server and the ID document
        8. The server can request/challenge age verification, and the ID document answers with “is met”

        At least the Wikipedia page isn’t detailed/technical on step 8, but if you attempted a man-in-the-middle attack it wouldn’t work, because you can’t fake being a valid ID document; that is ensured by the challenges and private/public-key cryptography.
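        Under the description above, steps 4–8 might be sketched like this. The class names and the HMAC shortcut are mine, not the real eID protocol (which uses certificate chains and asymmetric protocols such as PACE/EAC); HMAC over a batch-shared key only mimics the “valid but not individually identifiable” property of step 5.

```python
import hashlib
import hmac
import secrets

# Hypothetical stand-in keys, generated fresh here for the sketch.
BATCH_KEY = secrets.token_bytes(32)       # shared by a whole batch of IDs
PERMISSION_KEY = secrets.token_bytes(32)  # tied to the service's permission cert

class IDDocument:
    def __init__(self, birth_year: int, pin: str):
        self._birth_year = birth_year
        self._pin = pin

    def challenge_server(self, server) -> bool:
        # Step 4: the ID challenges the server to prove it holds the cert.
        nonce = secrets.token_bytes(16)
        answer = server.prove_permission(nonce)
        expected = hmac.new(PERMISSION_KEY, nonce, hashlib.sha256).digest()
        return hmac.compare_digest(answer, expected)

    def prove_validity(self, nonce: bytes) -> bytes:
        # Step 5: every document in the batch answers with the same key,
        # so the response does not identify an individual.
        return hmac.new(BATCH_KEY, nonce, hashlib.sha256).digest()

    def answer_age_query(self, pin: str, latest_birth_year: int):
        # Steps 6-8: the PIN gates the query; only "is met" leaves the card.
        if pin != self._pin:
            return None
        return self._birth_year <= latest_birth_year

class VerifierServer:
    def prove_permission(self, nonce: bytes) -> bytes:
        return hmac.new(PERMISSION_KEY, nonce, hashlib.sha256).digest()

    def check_validity(self, doc: IDDocument) -> bool:
        nonce = secrets.token_bytes(16)
        expected = hmac.new(BATCH_KEY, nonce, hashlib.sha256).digest()
        return hmac.compare_digest(doc.prove_validity(nonce), expected)
```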

        • PeriodicallyPedantic@lemmy.ca · 2 days ago

          I’ll need to look into it a bit more, but I’m skeptical that this will work in practice:

          How can they confirm that I’m the owner of the passport? How do you prevent them from selling the fields they requested, once those have been uniquely linked to you? How do you prevent the government from keeping track of all the services you’re using?

          • Kissaki@feddit.org · 2 hours ago

            The first factor is your physical passport; the second factor is your PIN.

            I don’t see how age verification could prevent the selling of a verified age. Once a service acquires data, it could theoretically sell it, illegally, if it ignores the law.

            The point is, you can share a small subset of fields without the others. No need to share your face or passport number.

            I’m not sure whether the authority knows about the request and response at all. I previously thought so, but this description doesn’t mention it, and it doesn’t seem technically required if both sides can verify public-key/certificate validity independently and then communicate with each other.

  • ameancow@lemmy.world · 4 days ago

    You can’t; however you frame this issue, there’s going to be a sacrifice. We all have to digest this.

    The best sacrifice you can make for the best outcome, though, is to limit your child’s screen time, AND ALSO YOUR OWN. Spend more time together and practice what you preach; you are also a child being harmed by social media.

  • Just normalize talking about online abuse and exploitation in real life, instead of yelling at kids or grounding them. And stop victim-blaming; even some professionals do that.

    Maybe we should also normalize talking about other things: body-image issues (including “problematic” ones), atypical attraction types, and quietly self-diagnosed neurodivergence, like realizing you might be plural or have a very specific kind of OCD.

    I’ve seen many online abusers target kids through exactly this stuff. With little help available, and with all the stigma around mental health and the therapists who victim-blame, kids end up going online where predators are watching, get preyed on through those vulnerabilities, and then the predators shift the blame onto them.

    I’ve seen kids as young as 12 in some high-risk mental health communities. You can tell nobody wanted them, but predators definitely do. Basically: don’t have kids if you can’t accept them as they are. And if your kid becomes dangerous later on, I think you bear some responsibility for that while they’re underage.

  • KingOfTheCouch@lemmy.ca · 5 days ago

    I like to think I’m a tech-savvy parent, and the amount of tooth-gnashing needed to set up and maintain child accounts is incredible. I’m convinced the foxes guarding the henhouse are using dark patterns to make parents give up.

    Why can’t I just get a notification on my phone saying “Hey, kiddo wants to have screen time. Approve?”

    Hell, I’d love a notification saying “Kiddo started watching Mr. Blah.” If I got the notification and I didn’t want them watching that, I could block the video, or creator with a click. WHY ARE WE NOT AT THIS LEVEL OF CONVENIENCE?

    A LOT of these concerns would go away if phones/tablets/TVs had these simple controls. Move those privacy controls into the home and MAKE them so easy a neanderthal could operate them.

    If I have to add *.newsocialbook.com to my router’s blocklist myself, you can bet your damn ass that “LiveLaughLoveMom<3” is going to keep demanding that someone else do it for her.
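    A minimal sketch of the one-click controls the comment asks for; all names are hypothetical, not any real platform’s API.

```python
# Toy per-child policy object the parent could update from their own device.
class ParentalPolicy:
    def __init__(self):
        self.blocked_creators = set()
        self.screen_time_approved = False

    def notify_parent(self, event: str) -> str:
        # Stand-in for a push notification to the parent's phone.
        return f"Kiddo event: {event}. Approve/Block?"

    def block_creator(self, creator: str):
        self.blocked_creators.add(creator)   # "block with a click"

    def may_watch(self, creator: str) -> bool:
        return self.screen_time_approved and creator not in self.blocked_creators
```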

    • LePoisson@lemmy.world · 4 days ago

      Capitalism. Everything you described costs money to create and maintain and it generates zero (or negative) profit. Most people aren’t going to want to pay for some sort of nanny toolkit.

      Don’t get me wrong, I agree with you and it should be like that. Our current systems are not going to bring that about though.

  • DFX4509B@lemmy.wtf · 3 days ago

    Parental controls have been an effective tool for decades. In combination with actually keeping an eye on your kids, of course.

  • lemmy_outta_here@lemmy.world · 5 days ago

    Kill the engagement algorithm. Your feed should contain a chronological list of posts made by people you subscribe to. In one stroke you could end the doomscroll - not just for kids, but for everybody. Also, infinite scrolling should be banned.
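    The proposed feed is simple enough to state in a few lines. A hedged sketch, with a hypothetical Post type:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    body: str
    posted_at: datetime

def build_feed(posts, subscriptions):
    """Pure chronological feed: only authors you subscribe to, newest first.
    No engagement score, no ranking model, nothing optimizing for outrage."""
    return sorted(
        (p for p in posts if p.author in subscriptions),
        key=lambda p: p.posted_at,
        reverse=True,
    )
```

    Pagination with an explicit end ("you're all caught up") instead of infinite scroll would follow the same principle: the feed is a finite, user-chosen list, not an endless generated stream.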

    • madnificent@lemmy.world · 5 days ago

      I’ll reply to this random comment with my take: there’s no winning move as a parent.

      The problem is being locked out. If your kid is the only one not on social media and all the other kids are, your kid will be socially left out.

      All the kids are on a chat platform you don’t support. What do you do? Disallow it and give them a social handicap that might scar them, or allow it and take the risk?

      The same goes for allowing images on other platforms. Since GDPR, schools seem to care. Yet if it’s a recording that will be put on social media, you get to explain to your 4-year-old why they weren’t allowed to participate… It sucks.

      I don’t know what the right way forward is. I don’t think this is it. Something is needed though. We should at least signal what we find acceptable as a society. Bog-stupid rules that are trivial to circumvent might be good enough, or perhaps some ad campaigns like we ran against smoking (hehe, if it’s for something we support, then ads are good?).

      Regardless, the current situation clearly doesn’t work. It would be great if we could find and promote the least invasive solutions.

      • frostedtrailblazer@lemmy.zip · 5 days ago

        I feel that communicating your concerns to other parents and the school can help. Some forms of online socialization can make sense once kids are in middle school or high school, but even then you’d want a pretty locked-down system, imo.

        Not every parent is going to let their kids use technology to talk to their friends, especially not all the time. That’s not how I grew up, and I turned out fine, developmentally speaking. As a parent you can also seek out local parents with a similar philosophy so your kids have friends raised the same way.

        • undeffeined@lemmy.ml · 5 days ago

          You’d be surprised by what parents let their kids do. My little anecdotal sample contains mostly highly educated people, but most of them don’t place any restrictions on their kids’ screen time. They claim they’ve talked to their kids, who have assured them they don’t look at anything they’re not supposed to, but that’s just not what happens in reality.

          What really happens is that the kids with no restrictions engage with all the predatory bullshit on these platforms, nonstop. I see it with my own eyes when my kid brings their friends over.

          Communication is key, but unfortunately the business model of these platforms is based on addiction, children are not equipped to deal with it, and parental controls are an essential component.

          • madnificent@lemmy.world · 5 days ago

            I believe the parent post nicely sketches out what the “best” move is. I have seen no better approach myself. At the same time, I see what you see: the best approach isn’t all that great. If you’re lucky and find the right people, it could work. There’s a lot of luck involved.

            That’s why I do think there should be some regulation indicating what is tolerated. It seems to me the parent poster may agree (and thus also with your take).

            Since GDPR you can tell the school you don’t want pictures on platforms you disagree with. You may miss out on seeing the photos, and you might come across as crazy, but you can (and you should). We were given a choice, at the cost of extra paperwork and some limitations.

            Even without the addiction problem of these platforms, we should nurture and build a good society around us. Trying to find like-minded people is a valid take.

            I don’t think that’s the end of it. Given the state we’re in, the network effect, and the fragile ego of developing kids, I suppose we need a stronger push.

            AI-enforced age verification, or logins that let you be tracked everywhere, is not the solution in my current opinion; that’s a different problem. The real problems are the addictive and steering nature of the platforms, which seems hard to pin down in clear legal language.

            I wonder how “these platforms” should be defined and what minimum set of limitations would give us and the children the necessary breathing space.

            • flamingleg@lemmy.ml · 4 days ago

              The minimum would be transparency for the algorithm. If users could see exactly what a social media algorithm is doing with their content feed, they would always have a way to identify and escape dark patterns of addiction.

              But this minimum itself would require powers to compel tech companies to give up what they would describe as intellectual property. Which would probably require a digital bill of rights?

              The most practical option would be to just ask your kids directly about the kinds of content they’ve been consuming and why. Dinner table conversations can probably reveal those dark patterns just as well

  • ChristerMLB@piefed.social · 5 days ago

    Some of it can be accomplished by just setting universal demands for how social media works for all users:

    • ban targeted advertising
    • make it mandatory for companies to ensure algorithms don’t prioritize posts that make users angry, scared, or depressed

    Stuff like that. These kinds of regulations don’t involve ID checks, and could take care of a big chunk of the problem.

        • Saledovil@sh.itjust.works · 2 days ago

          I figure a ban of targeted advertisement would look like “The ads are only allowed to change once a day, and everybody during said day sees the same ads”. Whereas currently, each time you load a website, there’s an impromptu auction to sell the ad spots. (Advertisers don’t actually have to pay until you click their ad). So there would be less incentive to keep the user constantly engaged, as it would be enough if the user just visits regularly.
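          That rotation rule is easy to make concrete. A sketch under the comment’s assumption (ads change once per day and are identical for everyone); the function and pool names are hypothetical:

```python
import hashlib
from datetime import date

def todays_ads(ad_pool, n, day=None):
    """Pick the same n ads for every visitor on a given day.

    The rotation is keyed only on the calendar date, never on who is
    looking, so there is no per-user auction and no profiling signal.
    """
    day = day or date.today()
    seed = hashlib.sha256(day.isoformat().encode()).digest()
    # Deterministic date-keyed shuffle: order ads by a hash of (date, ad).
    ranked = sorted(
        ad_pool,
        key=lambda ad: hashlib.sha256(seed + ad.encode()).digest(),
    )
    return ranked[:n]
```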

    • ageedizzle@piefed.caOP · 4 days ago

      Banning targeted advertising would definitely be a more realistic solution than banning advertisements in general (which some people here are advocating for). I’m really not a fan of ads and would love for them to be banned, but I understand that’s not politically realistic given the large role they play in our economy.

  • baller_w@lemmy.zip · 5 days ago

    I’m not in favor of providing ID for anything. If a service requires it, I won’t use that service. Also, I can’t think of a verification system like this that hasn’t been bypassed or exploited, so it’s largely an exercise in futility.

    However, a compelling alternative is to use your phone’s biometrics to perform a challenge and verification. Basically, your device acts as your ID, so sites never see it. I think this is far better than every website keeping a copy of your identity.

    • ageedizzle@piefed.caOP · 5 days ago

      Using biometrics is an interesting idea. It could work like Apple’s face-scan-to-unlock feature, where the model of your face never leaves your device but can still be used as a second factor to access your banking, for example.

      • baller_w@lemmy.zip · 1 day ago

        Exactly this. If I had to choose between hundreds of third-party websites having my ID and only my phone having it, I’ll take my phone.

        We already have very sophisticated ways of validating payment and passport information with our devices. Validating age could be as simple as a registration procedure between the device and the identity issuer, confirming that the device is held by a person “of age”, and that’s it. If that user then successfully completes a biometric challenge, allow the activity.
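        A toy sketch of that device-side model, assuming a secure-enclave-style store (the names are illustrative, not any real OS API): the issuer sets an “of age” flag once at registration, and afterwards only a yes/no ever leaves the device, gated by the biometric.

```python
from dataclasses import dataclass

@dataclass
class SecureEnclave:
    of_age_flag: bool            # set once during issuer registration
    enrolled_biometric: bytes    # template never leaves the device

    def attest_age(self, presented_biometric: bytes):
        """Release only a yes/no; no ID or template is ever exported."""
        if presented_biometric != self.enrolled_biometric:
            return None          # biometric challenge failed
        return self.of_age_flag
```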

        So web browsing goes from “I’m John Doe and here’s my ID proving it” presented to every site (which has HUUUUUGE PRIVACY ISSUES) to “this anonymous user is over 18; this one is over 21; this one’s not”.

        Also, if this trend of forcing websites to ID you continues, it will enable a renaissance in data mining. Right now companies see “actor is in ZIP code 90210; rain in the forecast” and put the two together to show “maybe they need a new slicker”. That’s simplified, of course, but that’s basically the trick: you can combine hundreds or thousands of these data points to paint an ever-clearer picture of a person, but you never know exactly who they are. These ID laws are changing that rapidly.

        This also has the potential to be used for some very dark purposes. Example: said something on Instagram critical of the US President? You don’t get to vote because of some label.

        My position is still if the site or service requires my ID, then I don’t need it that badly.

        • ageedizzle@piefed.caOP · 13 hours ago

          Yeah. The OS-based biometric model of verification definitely has some advantages over service-by-service verification (so long as it’s done in a way that doesn’t make it easier to fingerprint by device). The biggest concern I’d have, though, is what this might do to niche operating systems, like Linux distros or GrapheneOS. Will they be forced to implement age verification as well, and if so, will they have the means to do it?

          The comparison to credit card verification is interesting though and intuitively it seems like it would make it easier for niche operating systems to manage these requirements, since they could largely outsource that functionality (in the same way most websites outsource the handling of credit card information). This model still might make it easier for governments to profile people though. I’d be interested to hear what a privacy expert has to say about the viability or tradeoffs with a model like that.

  • UnspecificGravity@piefed.social · 5 days ago

    By getting rid of shitty corporate social media that makes money by exploiting people.

    This is like suggesting that the solution to protecting your kids from tigers roaming the street is to lock them in their rooms. Nah, just get rid of the fucking tigers.

    • ageedizzle@piefed.caOP · 5 days ago

      As long as corporate social media is closed source, it would be hard to know whether a no-advertising policy was being fully adhered to. A good example is the class-action lawsuit over Chrome’s Incognito mode: for years, Chrome got away with collecting personal browsing data in Incognito despite insisting it didn’t. Something similar could happen with social media. To get around that, there could be a legal requirement for social media to be open source. That might run into intellectual-property issues, though, and the lobbying against it would be so intense that I doubt a law like that would ever pass without massive political will.