Welcome to the latest edition of FPF’s Youth & Education Privacy newsletter. I’m Jamie Gorosh, policy counsel on the Youth & Education team at FPF. My work right now is mainly focused on the regulation and use of edtech products, student monitoring, and tracking student privacy legislation.

Among other recent developments in child and student privacy, this newsletter highlights:

  • How states are approaching kids’ online safety bills
  • Activity from the U.S. Department of Education
  • “AI alone won’t solve the problem AI created”
  • An age verification update
  • More concerns about teens in the Metaverse

As we continue to refine the content and format of this newsletter, we want to hear from you - what’s on your mind, and how can we help? Reach out to us anytime by replying to this email.

“The Kids Are Not Alright”
 

Social Media and the Teen Mental Health Crisis

“Leaders in both parties convey an increasing sense of urgency to address epidemic levels of teenage anxiety, depression, loneliness and lashing out,” the Washington Post editorial board wrote, detailing the “glaring” need, the “vast number of difficult-to-solve issues” that have led to the crisis, and the variety of responses that states are evaluating and implementing. While social media is one contributing factor, the piece highlights broader cultural challenges, citing a recent survey that showed the declining value that Americans place on things like community involvement. “Intolerance, polarization and the demonization of the other fuel disunity in civic life…The kids are not alright.”

The chief science officer for the American Psychological Association recently echoed this sentiment. “The youth mental health crisis started long before social media. It is not because of social media, certainly not exclusively,” he told Slate. “It has very much to do with stress, polarization, and long-standing concerns about inadequate mental health care in the U.S.” Others, including New York University professor Jonathan Haidt, argue social media has played a much larger role in the current crisis.

Regardless of how much of a contributing factor social media has been, there's significant - and bipartisan - interest in implementing more protections for kids online as part of a broader policy response to the teen mental health crisis. While states are considering a wide variety of options, those options increasingly fall into two general categories.

Two Emerging Legislative Approaches 

Weeks after Utah passed legislation requiring, among other things, teens to obtain parental permission to have a social media account (read our analysis, and more in last month’s newsletter), Arkansas passed a similar bill of its own. The Arkansas bill, set to go into effect in September, is “confusingly specific on what it actually considers to be a social media company,” The Verge reports. “One by one by one, the bill is all exclusions of more or less every single possible social media platform,” leaving only Facebook, Instagram, and likely Twitter affected, TechDirt noted. One other notable difference between the Arkansas and Utah bills: the Arkansas bill does not contain the parental monitoring requirements (including the ability to access a teen’s direct messages) found in the Utah bill.

The Arkansas and Utah bills, along with similar proposals that have been introduced in Iowa and Louisiana, generally represent one approach: giving parents much more control over, and responsibility for, their child’s online activities.

In an op-ed, Utah Gov. Spencer Cox compared his state’s efforts to protect kids online to the “countless” safety measures used to protect kids in the physical world, including seatbelts, fences around pools, and minimum driving and drinking ages. Responding to concerns about government overreach, he notes “the government already regulates this aspect of our digital lives…This is about taking power from companies and giving it to parents in the interest of their kids, not the government.” A recent op-ed published in the Washington Post made a similar case for parenting in the digital world, calling the Utah bill “a playbook that other states - and the federal government - should follow.”

The strict age verification requirements in the Utah/Arkansas-style approach do raise privacy concerns. “There is a twisted irony in the law’s requirement that everyone upload more information,” an Electronic Frontier Foundation team member told Bloomberg. The article concludes by noting, “It shows the bind we seem to have gotten ourselves into. Laws like these, born of a deep suspicion of tech companies, would hand them even more sensitive data.”

Two 20-year-olds, the co-founders of “Design It for Us,” a newly launched coalition of young people advocating for safer social media, wrote a piece for Gizmodo (“The Utah and Arkansas Social Media Bans Won’t Protect Us”) outlining their concerns with the bills, noting they “aren’t about tech accountability or children’s safety; they absolve … tech giants of responsibility to create products that are safe for children and instead puts the onus on parents to vigilantly monitor kids’ activity online.” The better approach, they argue, is “standards that make platforms safer, not keep us off of them,” like those outlined in the Age Appropriate Design Code legislation that has passed in California and been introduced in Oregon and Minnesota (where teens have also gotten involved).

danah boyd, author of It's Complicated: The Social Lives of Networked Teens, also cautions against blaming social media alone for youth mental health concerns. “Does social media cause mental health problems? Or is it where mental health problems become visible? I can guarantee you that there are examples of both.”

California, meanwhile, is considering an additional measure, SB 287, that would fine social media companies whose algorithms lead kids to harm themselves or others - including through content promoting eating disorders, suicide, or the purchase of fentanyl - and would also ban algorithms that could facilitate the sale of illegal guns. Some opponents of the bill argue in part that “U.S. law limits the liability of digital service providers,” urging the state to wait for a ruling on the matter from the U.S. Supreme Court in Gonzalez v. Google, expected this summer.

One thing to keep an eye on at the federal level: the Washington Post reports on a new, bipartisan proposal that would set a minimum age (13) for kids on social media and require teens ages 13-17 to obtain parental consent to use those platforms.

An Age Verification Update

As more states adopt privacy laws that include age verification/assurance requirements, questions about how those will work have become increasingly urgent - for example, Arkansas’ social media bill is set to go into effect in September. 

Calls for “online services [to] take a realistic, accurate, efficient and accountable view on age assurance” may be more challenging to implement than policymakers realize, especially in the U.S., where age assurance and verification is still a “growing ecosystem.” (Stay tuned for more from the Youth & Ed team on the differences between age verification, assurance, and estimation in the coming months!)

A blog post by Eric Goldman analyzing the potential conflict between mandatory age verification laws and biometric privacy laws dives into this issue further. Goldman notes, “[T]he legislatures have no idea what technology will work to satisfy their requirements. It seems obvious that legislatures shouldn’t adopt requirements when they don’t know if and how they can be satisfied–or if satisfying the law will cause a different legal violation.”

Daphne Keller, Director of the Program on Platform Regulation at Stanford's Cyber Policy Center, highlighted the challenge of age verification as part of her critique of how child safety bills “would effectively regulate content on platforms, but don’t say so.” She describes age verification as a “big, big, BIG issue... Can platforms distinguish child users from adult users without seriously undermining online privacy?” Read the rest of her thread.


Activity from the U.S. Department of Education

Third-party servicer guidance

In an April 13 blog post, the Department of Education announced that it was delaying (but not, as edtech blogger Phil Hill notes, fully rescinding) the implementation of its “much-maligned” third-party servicer guidance released earlier in the year, which would have imposed new reporting requirements on many higher education edtech providers. In its post, the Department noted it had received “significant and helpful feedback in the form of more than 1,000 comments” in response to the original guidance and that it plans to publish updated guidance in the future, with an implementation deadline that would follow “at least” six months later. Those comments came from Educause and other higher education groups, with fewer than 1% supportive of the guidance.

Just days before the Department walked back its plans, online program management company 2U had filed a lawsuit against the Department over the guidance, on the grounds that it “has exceeded its authority and violated procedural law by implementing regulations without enough public input or reasonable time for institutions to comply.” And while 2U applauded the Department’s announcement, the company said in a statement that it plans to move forward with its lawsuit.

New guidance on student health data

The U.S. Department of Education released new guidance reminding schools of the privacy protections that apply to student health records. One document provides reminders for school officials about their responsibilities under the Family Educational Rights and Privacy Act (FERPA); a second document clarifies the rights of parents and eligible students under FERPA.


“AI alone won’t solve the problem AI created”

New to the ChatGPT and education conversation? Our primer in the February newsletter may be a helpful place to start.

Using AI to detect AI

The Washington Post reports that Turnitin, whose plagiarism detection software is used by 2.1 million teachers, released software designed to catch AI-generated work to 10,700 secondary and higher education institutions on April 4 (two percent of its customers opted out ahead of the launch). The Post tested the software ahead of its launch, with concerning results - the software got over half of the scenarios tested (student-written, AI-generated, or mixed source) “at least partly wrong.”

“AI alone won’t solve the problem AI created…detectors can sometimes get it wrong — with potentially disastrous consequences for students,” the Post notes, highlighting a key difference between accusations of plagiarism, where there is a source document that can serve as evidence, and of using AI, where there is no such source document. 

The company claims the detector is 98 percent accurate overall. The Post tested multiple AI-text detectors and found Turnitin’s to be more accurate than others on the market, though the concern over false positives remains. Following the Post’s tests, Turnitin added a note to its score (the share of a paper it deems likely to be AI-generated) that reads, “Percentage may not indicate cheating. Review required.”
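A rough back-of-the-envelope sketch shows why false positives matter even at a high headline accuracy. The numbers below are purely illustrative assumptions, not figures Turnitin has published:

# Hypothetical illustration: a detector with an assumed 2% false-positive
# rate applied to a pool of essays that are all genuinely student-written.
human_written = 10_000        # essays actually written by students
false_positive_rate = 0.02    # assumed share wrongly flagged as AI

wrongly_flagged = human_written * false_positive_rate
print(f"Essays wrongly flagged as AI: {wrongly_flagged:.0f}")  # -> 200

In other words, at the scale of Turnitin’s customer base, even a small error rate would translate into a large number of students facing unfounded accusations - precisely the “disastrous consequences” the Post describes.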

As AI models continue to evolve at a rapid pace, whether software designed to detect them can keep up remains an open, and important, question. In the meantime, we have “a huge problem without a solution that is putting both teachers and students in a precarious position.”

An evolving response

ChatGPT is still remarkably new - but there are some signs that the response and reception to it may be evolving from the “initial panic.” “In hindsight, the immediate calls to ban ChatGPT in schools were a dumb reaction to some very smart software,” MIT Technology Review wrote - do you agree?

EdWeek asked ChatGPT directly whether it should be banned in schools, and its response was, essentially, that it depends - it can be a valuable resource for students, but it can also be easily abused and is not without data privacy and security risks, especially if students access it from their personal devices.

Schools are trying to figure that out on their own. Some have banned ChatGPT, but others are leaning in.

Khan Academy and Khan Lab School in Silicon Valley are leaning in, and in a big way: on the same day GPT-4 was released, Khan Academy announced that the model had been incorporated into its interactive tutoring program. Founder Sal Khan noted, “We view it as our responsibility to start deeply working with artificial intelligence, but threading the needle so that we can maximize the benefits, while minimizing the risks.” Part of that risk-minimization strategy is a series of “diligent” steps designed to protect students, parents, and educators that the organization outlined in a recent demo. And while Khan plans to share a version of the tutor, known as “Khanmigo,” with other schools, questions remain about its effectiveness in other settings.

Two updates on the other end of the spectrum. Some tech leaders have called for a “pause” in the continued development of AI technology until its risks can be better assessed. And just weeks after Italy’s privacy authority, the Garante per la Protezione dei Dati Personali (GPDP), announced a ban on ChatGPT for “unlawful collection of personal data,” it announced a series of steps and safeguards, largely related to children’s data privacy, that OpenAI could take (by April 30) to have the ban lifted.


More Concern About Teens in the Metaverse

A coalition of online safety groups recently wrote to Meta CEO Mark Zuckerberg, urging the company to end its plans to allow teenagers onto its metaverse app, Horizon Worlds, until it can prove the experience is “safe for their wellbeing.” The letter, which was signed by 36 organizations including Common Sense Media, Fairplay, and the Center for Countering Digital Hate, along with 37 individuals, cites risks the app poses to young people, including risks to mental health and well-being and privacy, as well as targeted marketing, abuse and bullying, predation, and more. The letter follows a similar one sent last month by Senators Markey and Blumenthal.

The Washington Post recently noted that Meta’s strategy for protecting participants in virtual reality (“empowering them to protect themselves”) is a “markedly less aggressive, and costly” approach than the one it takes on its social media platforms, which “are bolstered by automated and human-backed systems to root out hate speech, violent content and rule-breaking misinformation.”


BY THE NUMBERS
67: The Intercept details the Georgia National Guard’s controversial plans to geotarget 67 public high schools in the state with recruitment ads.
3 in 4: 74% of teenagers reported scrolling on social media for “too long” either daily or every time they open an app, according to polling recently released by Accountable Tech and LOG OFF. Only 8% of teens reported that a social media platform had never recommended they friend or follow someone they did not know, and only slightly more (12%) reported that they had never been followed or friended by someone they did not know.
1 in 3: A third of kids ages 8-17 with a social media profile falsified their age as 18+ when signing up, according to research commissioned by Ofcom, highlighting the ongoing challenge of achieving age verification that is both effective and privacy-protective.
1.12 billion: Two lawsuits filed by the nonprofit organization Ius Omnibus (“Justice for All”) in Portugal against TikTok seek €1.12 billion in collective damages. One of the cases accuses the platform of not doing enough to prevent users under 13 from using the platform without permission from a parent or guardian; the other case, specific to users 13 and over, alleges "misleading commercial practices" and "opaque privacy policies."
YOUTH & ED TEAM IN THE NEWS
My colleague Bailey Sanchez was quoted talking about youth privacy legislation in articles published by MIT Technology Review, Politico, and IAPP.
HEAR FROM OUR CEO
FPF CEO Jules Polonetsky will be discussing “The Future of Children’s Digital Privacy” at the Privacy + Security Forum Spring Academy.
RIGHTSCON PRESENTATION 
I will be presenting at RightsCon Costa Rica on the recent report we published in collaboration with LGBT Tech, Student Voices: LGBTQ+ Experiences in the Connected Classroom.
NEW TRAINING PROGRAM
FPF has a new training program that provides an in-depth understanding of today’s most pressing privacy and data protection topics.

WIRED reports on a number of “outlier cases” in which U.S. Immigration and Customs Enforcement (ICE) agents used an administrative subpoena known as a “1509 customs summons” against schools, abortion clinics, news organizations, and others - cases that have raised “serious concern.” A former DHS official noted, “It seems reasonable that ICE should be able to inspect records from a company like Amazon for customs investigations…But what possible use or authority would ICE have for subpoenaing records from an abortion clinic?”

Copyright © 2023 Future of Privacy Forum, All rights reserved.

