Before one can determine what constitutes best practices, however, the nature of the Internet must be considered. First and foremost, it must be understood that the central role of the Internet is to enable access to information and freedom of expression. Second, it must be understood that the Internet is a dynamic and interactive environment: users are generating, uploading, and sharing content with both individuals and communities. And third, the Internet industry is very diverse, ranging from large global providers to small locally run services.
As stated in a 2009 report on improving the state of online safety and Internet literacy from The Berkman Center for Internet & Society at Harvard University:
“A technology or combination of technologies designed for one environment or for use by one type of service provider may not be able to provide the same level of effectiveness in a different context. Each site has its own unique architecture, equipment, and operations, so integration of new software requires careful planning and testing in order to avoid unintended consequences or even site outages. Thus, any technological approach must be appropriately tailored for the context in which it operates, given the wide range of services on the Internet.”
Furthermore, “Some technologies may offer improved safety, but may have harmful public policy consequences and unintended consequences for youth and parents that outweigh the safety improvement. A balanced perspective is particularly critical in light of the Internet's central role in enabling freedom of expression and access to information from around the world.” (Berkman, p. 27)
As acknowledged by Berkman and stated here in other sections, the quest for improvements in online safety is occurring among an interconnected web of interests that are focused on various Internet behaviors, threats, and activities. Therefore, as Berkman states, “A combination of technologies, in concert with parental oversight, education, social services, law enforcement, and sound policies by social network sites and service providers” is needed to help children mitigate the risks and realize the benefits of the Internet. It should be easy for parents and others to find clear and simple explanations of what information and safety elements exist, how they function, and what a user can do in various circumstances. To that end, best operating practices should:
These best operating practices should be crafted so that they can be:
Ensuring children's online safety is a difficult and complex task that calls for input from and action by a wide variety of stakeholders. There is no “silver bullet”—no single technology or approach that has proved effective. Rather, what is required is:
Basic information and education about the digital landscape must be in place and available to all children, parents, educators, and caregivers so they can understand the various risks, what constitutes appropriate behavior in different online spaces, and what options they have in services and terms of use. In addition, children need to learn how to use the technology efficiently, effectively, and ethically so that they can participate fully in social, economic, and civic life in the digital age. Best Practices should also encourage and empower parents, educators, and caregivers to understand the technology so they can make informed initial and ongoing choices for their children's safety and security.
1.1 Provide access to information that will educate parents, educators, and children about media literacy and ethical digital citizenship, and help them think critically about the content consumed and created on the Internet.
1.2 Make safety information for users, parents, and caregivers prominent, easily accessible, and clear.
1.3 Provide information that is easy to find and access from the home page, that is available during registration, and that can also be found in other appropriate places within the Web site or service.
1.4 Include specific information or FAQs about the services offered by the provider, especially safety tools and how to use them (e.g., conducting a safe search, setting filtering options, defining and setting appropriate privacy levels).
1.5 Provide links to additional resources that offer relevant safety and security information.
1.6 To make messages about online safety clear and easily recognizable to a variety of users, consider using consistent themes, and common words and phrases. Provide messages in multiple languages as appropriate.
2.1 Provide a clear explanation of how information collected at registration and set up will be used, what is public vs. private on the site, and a user's ability to modify, hide, and prevent access to user information.
2.2 Make safety information available during the registration process, prominent on the home page, and present in appropriate places within the service (e.g., welcome email/message, point-of-sale information).
2.3 Provide information in the terms and conditions and elsewhere that defines acceptable behavior, states that users are not anonymous and can be traced, and details the consequences of violating the standards of behavior.
2.4 Provide notice that violating terms or conditions will result in specific consequences, including legal ones if required.
The task force acknowledges that the issues of identity authentication and age verification remain substantial challenges for the Internet community due to a variety of concerns including privacy, effectiveness, accuracy, and the need for better technology in these areas. The Berkman report, for instance, concluded that:
Age verification and identity authentication technologies are appealing in concept but challenged in terms of effectiveness. Any system that relies on remote verification of information has potential for inaccuracies. For example, on the user side, it is never certain whether the person attempting to verify an identity is using their own actual identity or someone else's. Any system that relies on public records has a better likelihood of accurately verifying an adult than a minor, because adults have more extant records. Any system that focuses on third-party in-person verification would require significant political backing and social acceptance. Additionally, any central repository of this type of personal information would raise significant privacy concerns and security issues.
Best Practices in this area should recommend how technologies can be used to define and control a child's digital activities and help parents establish the technology structure that they determine best meets their family values and needs as children grow and become more self-sufficient.
4.1 Set defaults to at least a moderate level initially, but instruct users on how to customize settings for their own needs.
4.2 Provide information about company policy on filtering, including the default settings, explanations of the different safety, security, and filtering options (e.g., what is blocked by certain levels of filtering), how to make adjustments, and when settings might need to be reapplied (e.g., after a new version).
4.3 Consider carefully the placement and highlighting of sites belonging to and designed by children and youth (e.g., a child's profile page could become a “safe zone”; don't locate children's content near ads for adult-targeted materials).
4.4 Consider a “walled garden” approach when relevant with products aimed at children eight years of age and younger.
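The moderate-defaults guidance in 4.1 and 4.2 can be illustrated with a minimal sketch. Everything here — the `SafetySettings` type, the three filter levels, and the `customize` helper — is hypothetical and not drawn from any actual provider's product; it simply shows accounts starting at a moderate default that users can then adjust deliberately:

```python
from dataclasses import dataclass, replace

# Hypothetical filter levels, ordered least to most restrictive.
FILTER_LEVELS = ("off", "moderate", "strict")

@dataclass(frozen=True)
class SafetySettings:
    """Hypothetical per-account safety settings (illustration only)."""
    filter_level: str = "moderate"   # 4.1: default to a moderate level, not "off"
    safe_search: bool = True         # protective features on by default
    profile_public: bool = False     # private by default

def customize(settings: SafetySettings, **changes) -> SafetySettings:
    """Return a copy with user-chosen overrides (4.1: users can adjust)."""
    if "filter_level" in changes and changes["filter_level"] not in FILTER_LEVELS:
        raise ValueError(f"unknown filter level: {changes['filter_level']!r}")
    return replace(settings, **changes)

# New accounts start at the moderate default...
defaults = SafetySettings()
# ...and a parent can tighten (or a verified adult relax) settings explicitly.
stricter = customize(defaults, filter_level="strict")
```

The design point is that the safe value is the zero-effort value: a user who never opens the settings page still gets moderate filtering, while any change away from it is an explicit, documented choice.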
To provide the best response to problems, we recommend:
6.1 Have in place a robust procedure, backed by appropriate systems and resources, to handle complaints. Ideally, each company should have an Internet-safety staff position or cross-functional team charged with supervising the procedures and resources and given the authority and resources to be effective.
6.2 Provide a reporting mechanism visible from all relevant pages or sections of a site or service.
6.3 Consider providing a designated page with relevant information and instructions about how to submit a report or complaint including: