She's All Chat
Before we get into it: if you are new to this conversation and/or need a refresher (and no shame if so; this is all new and moving very quickly), two of the best places to start are:
Like many technologies, ChatGPT and other Large Language Models (LLMs) are disruptive. If we think about AI as performing human tasks, it is not hard to see where LLMs or other generative AI models could replace human writers or artists. While it is easy to anthropomorphize an LLM and say that it is doing the task of writing, a more accurate description is that LLMs are designed to predict text responsive to a given prompt. The math behind an LLM's output is probabilistic: the model estimates the probability that a candidate word is plausible in the given context. The problem that LLMs try to solve could be summarized as follows: what should the next word be, given the words in the input and the words already produced in the output?
Because ChatGPT has been trained on billions of human-written texts, the tool is optimized to predict plausible-sounding answers to the questions or requests that users give it as input. However, because the math is focused on the probability of plausible next words, ChatGPT and other LLMs have a tendency to produce incorrect information dressed up in plausible language. The LLM has no model of rationality for determining whether information is correct; it is only concerned with whether the next word it produces is probable given the patterns of text it saw during training. There are a number of use cases where ChatGPT and other LLMs can be incredibly meaningful and useful pedagogical tools, but anyone using LLMs should take knowledge gleaned exclusively from LLM responses with a large grain of salt.
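To make the "next word" framing concrete, here is a toy sketch of next-word prediction using simple bigram counts over a tiny made-up corpus. This is my own illustration, not how ChatGPT actually works; real LLMs use large neural networks trained over subword tokens, not word-frequency tables, but the underlying question ("which next word is most probable?") is the same.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model is trained on billions of documents.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word given the previous word."""
    counts = following[word]
    total = sum(counts.values())
    # Convert counts to probabilities, then pick the most likely word.
    probs = {w: c / total for w, c in counts.items()}
    return max(probs, key=probs.get)

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Note that the model picks the statistically likeliest continuation with no notion of whether that continuation is true, which is exactly why plausible-sounding but incorrect output is a built-in risk.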
With the caveats concerning accuracy out of the way, I wanted to highlight a few use cases for LLMs in an educational context that I have found useful. The New York Times recently published testimonials about use cases teens themselves have identified. While some said they used ChatGPT for essay writing, others suggested that chatting with the LLM improved their vocabulary, helped summarize information about a topic they did not understand from class, and allowed them to experiment with code to solve problems in new ways. The organization behind ChatGPT, OpenAI, has also put together a resource to help educators identify risks and opportunities for education use cases.
I personally have found that prompting the model to explain the difference between two related concepts (say, the SSL and TLS internet protocols) produces incredibly useful summaries of their similarities and differences. I have also used ChatGPT as a starting point for research, asking the model to list major Supreme Court cases on a given topic and summarize their holdings. While follow-up research is always advisable to verify the information, the LLM provided an incredibly helpful starting point for my research, comparable to, if not better than, a search engine or Wikipedia page. One feature I would also like to highlight is the way ChatGPT lets users keep providing prompts that probe an issue further or ask for clarification where the output provides too little context or background information. When I was a student, I often found that my questions were irrelevant to the rest of the class, or that I was stuck trying to figure out something other students understood intuitively. A tool like this would have been invaluable for clearing up questions and confusions I had as a student without placing an additional burden on my teacher.
Because ChatGPT is available for free online, many students will inevitably use this tool, but teachers may also benefit from this opportunity by meaningfully integrating LLMs into a lesson or by teaching students about appropriate and inappropriate uses of LLMs. I, for one, believe that there are many incredible, and pedagogically sound, use cases for LLMs that educators will find in the coming years. Just remember to tell students to double-check outputs to make sure the knowledge they take away from an LLM is backed up by reputable sources.
How Young is Too Young for Social Media?
For the second year in a row, President Biden called for stronger protections for kids online in his State of the Union speech. Read FPF’s statement following the speech, and The White House fact sheet for more detail about the Administration’s vision, including support for “safety by design” regulations at the heart of the UK and California Age-Appropriate Design Codes and similar bills that have now been introduced in a number of other states. For more background on Age-Appropriate Design Codes, read an analysis of the California AADC and a comparison of the California and UK codes by my colleagues Bailey Sanchez and Chloe Altieri.
While President Biden’s comments about kids’ privacy largely mirrored last year’s State of the Union, the conversation about kids on social media took a notable turn when the Surgeon General, Vivek Murthy, told CNN that he believes 13 is “too young” for kids to be on social media. His comments were quickly endorsed by Colorado Senator Michael Bennett and shared by FTC Commissioner Bedoya. Days later, a House bill to ban kids under the age of 16 from social media platforms was introduced by Rep. Chris Stewart (R-Utah) and a similar bill has been filed in the Senate by Senator Josh Hawley (R-Missouri). A separate bill that would ban TikTok nationwide (regardless of age) has been introduced in both the Senate and House.
The discussion continued the week after the State of the Union when the Senate Judiciary Committee held a hearing about protecting kids online. While it is still very early in this congressional session, it does seem like there continues to be bipartisan interest in advancing kids' privacy legislation in the Senate; NPR’s headline following the hearing read, “Senators talk about upping online safety for kids. This year they could do something.” For some context on last session’s child online safety legislation look no further than this policy brief comparing four of the top proposals.
One other bit of federal news: The Hechinger Report writes that funding included in the omnibus bill at the end of the year means that a ‘DARPA for education,’ “a major step toward developing, for education, the federally-funded research and development capabilities that have long existed in other fields,” may finally be happening.
Improving K-12 Cyber-Resilience and Breach Notification
In late January, the federal Cybersecurity & Infrastructure Security Agency (CISA) released a report and toolkit for K-12 institutions to help them better protect against cybersecurity threats. This guidance will be essential to ensuring K-12 cyber-resiliency in the face of growing threats that target educational institutions.
A recent ransomware attack has also kicked off an important discussion about breach notification in a school context. In September of last year the Los Angeles Unified School District (LAUSD), the nation’s second-largest school district, fell victim to a ransomware attack. Recent reporting by The 74 suggests hundreds if not thousands of sensitive mental health records for students and former students were breached and subsequently published on a “dark web” leak site. The 74’s reporting, which included analysis by FPF’s own Jim Siegl, highlights the lack of breach notification and transparency requirements in the primary federal privacy law, FERPA. Given the sensitive nature of this data, the report raises important questions about when, how, and in what detail individuals should be notified that personal information was stolen, especially in the context of former students who may no longer have contact with a school district.
The Youth Mental Health Crisis
My colleague Bailey took an extensive and important look at the student mental health crisis in last month’s newsletter, and I wanted to highlight a few developments since that time.
To start: President Biden spoke at some length about expanding mental health care in his State of the Union Address. While his remarks were light on policy specifics, he called on lawmakers to do more on mental health, adding, “[w]hen millions of young people are struggling with bullying, violence, trauma, we owe them greater access to mental health care at their schools.” Since then, the CDC has published alarming new data that underscores the scope of this crisis, finding an “unprecedented level of hopelessness and suicidal thoughts among America's young women,” including that 57% of teen girls reported feeling “persistently sad or hopeless” and 30% have seriously considered suicide.
In a late January speech, New York City Mayor Eric Adams announced what he called “the biggest student mental health program in the country,” access to telehealth services for all New York City high school students. While the details of the program are not yet clear, the initial reaction was generally “cautiously optimistic.”
Across the country, Los Angeles County made a similar announcement, offering access to mental health services to the 1.3 million K-12 public school students through a partnership with school-based telehealth company Hazel Health. The two-year program will be funded by $24 million from the state. “But having access to resources — or feeling that your district can deliver services — doesn’t guarantee that everyone will use them,” as there is still a stigma around seeking mental health support in some schools, or students may not realize they need it. Another big question facing student mental health programs remains their sustainability; many schools have funded expanded mental health services with COVID relief dollars that will expire in September 2024.
There is a lot happening in youth and student privacy at the state level. While this newsletter is not the venue for an exhaustive list, I've noted a few highlights below, and my colleagues Bailey Sanchez and Chloe Altieri would be happy to discuss these, or any other bills, with you further at a convenient time.
While this is not a legislative policy issue, the Florida High School Athletic Association recently voted to remove a question about a student athlete's menstrual history from its pre-participation physical evaluation form. The Association instead added a field asking students to list their “sex at birth.”
And while Florida HB 591 does not define what a social media platform is, the bill would require social media platforms to disclose information such as their use of “addictive design features” and to provide educational resources and information such as screen time data and parental settings.
Our analysis of proposed social media legislation in Utah focuses on three areas: parental consent under COPPA, age verification (now removed from the House bill), and strengthening privacy protections for children and teens. The original House bill, Utah HB 311, was replaced by a substitute bill on February 9th that removed the age verification and parental consent requirements. The substitute bill, however, still contains a notable private right of action with a rebuttable presumption that addiction, financial, physical, or emotional harms faced by users under 16 were caused by using or having an account on the social media platform. This stripped-down HB 311 passed the Utah House the same day it was substituted. A completely separate Senate bill, SB 152, which would still require social media companies to verify the age of Utah resident users and would require guardian consent for users under 18 to have an account, passed the Utah Senate on February 21st. It remains to be seen how the two chambers of the Utah legislature will proceed given the substantial overlap between these competing proposals.
A bill (SB66) that would require Arkansas residents to use a “digitized identification card” in order to view pornographic content recently passed the state senate. Read why many privacy advocates are worried, and why some researchers are warning about “unplanned ripple effects,” following the implementation of a similar law that went into effect in Louisiana on January 1.