The Kids Online Safety Act (KOSA) is a Senate bill designed to improve internet safety for young users, especially on social media. A companion bill was recently introduced in the House of Representatives. If passed, KOSA would represent one of the biggest changes to internet regulation for children since the Children's Online Privacy Protection Act (COPPA). Its success or failure has significant implications for social media platforms and parents alike. Having undergone multiple revisions, KOSA has addressed some of its earlier shortcomings, such as its effective mandate that platforms verify users' ages. Remaining issues include its overlap with existing parental control options and concerns over how regulators would choose to enforce the legislation.

Studies on Kids and Social Media

Proponents of bills like KOSA often discuss the relationship between adolescent social media use and a decline in mental health as if there is scientific consensus on the topic. However, research paints a different picture: the results are far less clear than one would expect given lawmakers’ strong opinions on the matter.

A new report called “Social media use and its impact on adolescent mental health: An umbrella review of the evidence,” published in the journal Current Opinion in Psychology, examined a sample of 25 reviews of studies on teenage social media use and its impact on mental health. Results show that most studies found weak to moderate associations between social media use and negative impacts on well-being. In addition, studies were inconsistent as to how strong these associations were. More research is needed to justify sweeping legislation like KOSA.

COPPA vs. KOSA

While both COPPA and KOSA focus on internet protections for children, KOSA would be far more expansive than COPPA. COPPA was written with data privacy in mind: it prohibits companies from knowingly collecting data from children under thirteen without explicit parental consent. Companies that do knowingly collect information from children are required to post notices on their websites describing how they use that data. They must also provide parents with a means of requesting what information was collected from their child.

In contrast, KOSA seeks to address potential harm, including protecting children against exploitation and bullying as well as from content that some believe exacerbates mental health problems. The bill imposes a duty of care on online platforms, meaning companies would be required to take reasonable steps to design their websites around the safety of minors.

In addition to establishing rules over data collection, KOSA also attempts to protect children from predators online. It aims to accomplish this by limiting the ability of other users to communicate with minors as well as by preventing users they have not added from viewing their profiles.

In general, KOSA works in conjunction with, rather than as a replacement of, COPPA. COPPA covers data privacy while KOSA would cover design choices on the part of the developer. Both, in theory, limit harm to minors.

Parental Control Options

A key function of KOSA is the mandate that platforms make available parental control options for minors and have mechanisms in place to report harm. However, this is largely duplicative of existing policies that have already been adopted by many major social media companies.

KOSA would require any platform that knowingly has accounts operated by minors to implement certain safety features. These range from privacy safeguards that prevent strangers from messaging minors to an opt-out option for algorithmic data collection. KOSA also mandates that parental controls be enabled by default and that there be easy-to-use tools for parents to delete their child’s social media accounts.

Most social media companies already provide parents with the option to moderate their child’s online activity. For example, Meta has several tools available on its family center webpage for Facebook, Instagram, and Meta Quest virtual reality system users. These tools range from content filters designed to intercept inappropriate messages to options that limit the visibility of likes and notifications.

Some apps have versions of the service specifically designed for young users that are siloed from the general use app. The biggest example of this is YouTube Kids, which is designed to ensure an age-appropriate experience for users as well as allow parents to monitor what their children are watching. While not a perfect solution, as inappropriate videos occasionally make it through the automated content filter, YouTube Kids does offer parents the option to block videos unsuited for their families or flag them for review if they think the video is inappropriate for the platform.

Beyond what platforms themselves offer, there is a plethora of other free tools available to help protect children online. Many of these come with the browser or device being used, such as Google’s SafeSearch or Apple’s parental controls for iPhones. Other, more advanced options can also be found online, such as parental control apps that let parents continuously monitor their child’s phone and control different kinds of content.

Most of the mandated safety options proposed by KOSA are already available for parents either through social media platforms, web browsers, or devices. This makes them redundant and potentially burdensome to companies.

Issues With Enforcement

Under the current version of KOSA, enforcement would largely fall to the Federal Trade Commission (FTC), which is tasked with regulating deceptive and unfair trade practices, and to state attorneys general, who are responsible for protecting state residents.

The latest version of the bill makes social media companies liable for design features that could harm minors. This even includes mechanisms designed to keep users on their platforms for longer periods of time, such as videos that play automatically. The Electronic Frontier Foundation argues that the features listed in KOSA are so overbroad that it is not clear when some features, like notifications, would be considered harmful.

By being so broad, KOSA unnecessarily gives the FTC and other regulators the power to enforce the law against content that is otherwise perfectly legal. An enforcement mechanism is necessary for the success of any law, but policymakers must be wary of how this mechanism might be abused.

Age Verification

KOSA itself is a mixed bag when it comes to age verification. Rather than mandating a new age verification method, the bill calls for a study on how to best create an age verification system without sacrificing internet privacy. This has historically been a difficult problem to solve when implementing age-based content restrictions.

Currently, there are several age verification systems that can be implemented either by the platform owner or by a third-party service, each with its own pros and cons.

  • The most common system involves self-identification, where the user must either select a prompt that states they are above the necessary age or input their birthday directly. This system has the advantage of collecting very little information on the user. However, it is also easily circumventable, leaving some demanding a more robust system.
  • Requiring a photo ID or another government document allows the company, or whoever is conducting the background check, to verify the user’s identity by cross-referencing their information with state databases. This makes it more difficult for someone to fake their identity. The downside to this system is that it requires users to submit documents that include identifying information.
  • Credit cards can also be used. The major advantage is that the infrastructure to protect the privacy of credit card transactions already exists and is widely trusted. While this is a plus, it can be easily circumvented by minors taking their parents’ or other third parties’ credit cards. Moreover, cards aren’t a perfect proxy for age. 
  • Biometrics and face scans are the perfect example of age verification tradeoffs. This information is hard to fake but arguably comes with the most privacy risks. Such an approach would require sharing very sensitive personal information that, if stored incorrectly, could be the target of hackers.
  • Age verification by indirect inference has also been proposed. Normally when this is discussed it involves analyzing the user’s behavioral patterns and interests to determine whether they are of age. However, even though this system doesn’t gather personally identifiable information, it still involves collecting a lot of information about the user and isn’t perfectly accurate.
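The tradeoffs of the first option above are easy to see in practice. A self-identification gate reduces to computing an age from a user-supplied birthday and comparing it against a threshold, and nothing stops a minor from simply typing a different date. The sketch below illustrates this in Python; the function names and the age-13 cutoff are illustrative assumptions, not anything specified in KOSA.

```python
from datetime import date

MINIMUM_AGE = 13  # illustrative threshold; KOSA does not fix a number here


def age_from_birthdate(birthdate: date, today: date) -> int:
    """Compute a user's age in whole years as of `today`."""
    years = today.year - birthdate.year
    # Subtract one year if this year's birthday hasn't happened yet.
    if (today.month, today.day) < (birthdate.month, birthdate.day):
        years -= 1
    return years


def passes_age_gate(birthdate: date, today: date) -> bool:
    """Self-identification gate: trusts whatever birthdate the user enters.

    This is the weakness of the approach: the check collects almost no
    personal data, but a user who enters a false birthdate passes it.
    """
    return age_from_birthdate(birthdate, today) >= MINIMUM_AGE
```

The entire "verification" is a comparison against self-reported input, which is why critics of this approach demand something more robust despite its privacy advantages.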

There are a number of steps that policymakers can take to make these methods safer, such as utilizing a double-blind third party or prohibiting companies from permanently retaining user data. However, even with these steps, policymakers must remember that there are tradeoffs associated with any approach.
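The double-blind third-party idea mentioned above can be sketched as a token flow: the verification service inspects a document and issues only a signed yes/no claim, the platform checks the signature and reads only the boolean, and neither side sees the other's data. The sketch below is a simplified illustration, not a proposal from the bill; the function names are hypothetical, and it uses a shared HMAC key for brevity where a real deployment would use public-key signatures so the platform could not forge claims.

```python
import hashlib
import hmac
import json
import secrets

# Hypothetical key shared by the verification service and the platform.
# A production design would use asymmetric signatures instead.
VERIFIER_KEY = secrets.token_bytes(32)


def issue_attestation(is_over_threshold: bool) -> dict:
    """Verification service: after checking an ID, emit only a yes/no claim.

    The platform never sees the document, and the verifier never learns
    which account the claim ends up attached to (the 'double-blind' idea).
    """
    claim = json.dumps({"over_threshold": is_over_threshold})
    tag = hmac.new(VERIFIER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}


def platform_accepts(attestation: dict) -> bool:
    """Platform: verify the tag, then read only the boolean claim."""
    expected = hmac.new(VERIFIER_KEY, attestation["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, attestation["tag"]):
        return False  # tampered or forged attestation
    return json.loads(attestation["claim"])["over_threshold"]
```

Even in this best case, the tradeoff remains: someone still has to inspect an identifying document, and the privacy guarantee depends entirely on the verifier discarding it, which is why retention limits like those mentioned above matter.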

Conclusion

The primary unique aspect of KOSA, in its current form, is that it ascribes liability to social media companies for harms to users, shifting responsibility away from individuals and toward companies. Other aspects of the legislation either replicate existing tools or request studies that should be conducted before restrictions are established.

The American Consumer Institute is a nonprofit education and research organization. For more information about the Institute, visit www.TheAmericanConsumer.Org or follow us on Twitter (X) @ConsumerPal.
