Missing Children’s Day — let’s bring them all home

Since it was founded 30 years ago, the National Center for Missing and Exploited Children (NCMEC) has helped reunite thousands of missing kids with their families.

I have had the privilege of serving on NCMEC’s board for nearly two decades and, during that time, we have seen a dramatic increase in the recovery rate thanks to NCMEC’s dedicated staff along with alert members of the public who report sightings of missing kids. From 1984 through December 2012, NCMEC assisted law enforcement with more than 195,300 missing-child cases, resulting in the recovery of more than 183,100 children, according to the organization. The recovery rate for missing children has grown from 62% in 1990 to 97% today.

I became involved with NCMEC in 1993 during the search for Polly Klaas, a 12-year-old girl who was abducted from her Northern California home, about 80 miles north of where I live. During the search I helped post Polly’s picture online. When Time magazine wrote about the effort, I was overwhelmed by requests from parents of other missing kids, which led me to contact NCMEC’s then-CEO Ernie Allen, who quickly realized the potential for using online tools to help find children. Even though we couldn’t save Polly, online tools such as MissingKids.com are now used routinely to help in the recovery of missing children.

Public-private partnership

NCMEC is a non-profit organization, not a government agency, though it was authorized by Congress to serve as the national clearinghouse for information about missing and exploited children. Congress has also designated NCMEC to run the national, toll-free, 24-hour missing children’s hotline; and operate the CyberTipline for online reporting of the sexual victimization of children and inappropriate sexual content.

NCMEC is a unique public-private partnership that receives funding from both the federal government and numerous private donors, ranging from large companies to individuals from all walks of life, including young children who hold fund-raisers or donate their pennies to help other kids.

Key facts

As you can see from the key facts below, only a tiny percentage of missing-children cases involve “stereotypical” abductions, but even children who are abducted by members of their own family can be in extreme danger and deserve to be protected and returned to their lawful parent or guardian.

And it’s not just missing children. NCMEC helps prevent and prosecute cases of child exploitation, including sexual exploitation of children as young as infants. The organization also works to rescue underage victims of prostitution, helping them recover from the trauma of what is often forced or coerced and extremely traumatic exploitation by adults who profit through human trafficking.

NetSmartz Workshop’s “Clicky” teaches young children about Internet safety and digital citizenship

NCMEC also operates the NetSmartz Workshop, which provides high-production value materials to help educate young people about how to stay safe on and offline.

Here are some “key facts” from NCMEC’s website.

According to the most recent comprehensive national study of missing children, conducted in 1999: [1]

  • Approximately 800,000 children younger than 18 were reported missing.
  • More than 200,000 children were abducted by family members.
  • More than 58,000 children were abducted by nonfamily members.
  • An estimated 115 children were the victims of “stereotypical” kidnappings, which involved someone the child did not know or knew only slightly. In these cases the child was held overnight, transported 50 miles or more, killed, ransomed or held with the intent to keep the child permanently.
  • To find the number of children missing from a specific state or territory, contact the state’s Missing Children Clearinghouse.
  • The first three hours are the most critical when trying to locate a missing child. The murder of an abducted child is rare; an estimated 100 such cases occur in the U.S. each year. A 2006 study indicated that 76.2 percent of abducted children who are killed are dead within three hours of the abduction. [2]
  • The National Center for Missing & Exploited Children® has assisted law enforcement in the recovery of more than 193,705 missing children since it was founded in 1984. Our recovery rate for missing children has grown from 62 percent in 1990 to 97 percent today.
  • The AMBER Alert program was created in 1996 and is operated by the U.S. Department of Justice. As of April 2, 2014, 688 children have been successfully recovered as a result of the program. [3]
  • As of December 2013, NCMEC’s toll-free, 24-hour call center had received more than 3,899,964 calls since it was created in 1984. Information about missing or exploited children can be reported to the call center by calling 1-800-THE-LOST (1-800-843-5678).
[1] Finkelhor D., Hammer H., Schultz D., Sedlak A. National Estimates of Missing Children: An Overview, U.S. Department of Justice, 2002.
[2] Brown K., Keppel R., McKenna R., Skeen M., Weis J. Case Management for Missing Children Homicides: Report II, National Center for Missing & Exploited Children and U.S. Department of Justice, 2006.
[3] AMBER Alert, U.S. Department of Justice.


Posted in Child safety | Comments Off

Internet security is a global issue that requires global cooperation

This post first appeared in the San Jose Mercury News

PARIS — The National Cyber Security Alliance, or NCSA, is a Washington, D.C.-based organization that promotes online security and safety. Its board consists of representatives from Microsoft, Google, Facebook, Comcast and other U.S. companies, and it works closely with the Department of Homeland Security to provide security advice for American businesses and consumers. I’ve attended meetings in Washington, Pittsburgh and Silicon Valley with NCSA staff, and the agenda has always focused on U.S. security issues.

NCSA, along with the Anti-Phishing Working Group, is the main force behind the “Stop. Think. Connect.” campaign, at StopThinkConnect.org, that seeks to raise awareness by encouraging people to pause and think about what they do before they “connect.” It’s kind of the cyber equivalent of the “buckle up for safety” campaign that promotes safety for motorists and passengers.

So I was a bit surprised when NCSA invited me to participate in an international online safety awareness meeting in Paris, attended by representatives of nonprofits, governments, universities and companies from several countries. The event was hosted by Microsoft and took place at its Paris office.

But I was reminded of the global nature of cyberthreats on the day we convened our meeting last Tuesday as news broke that the Justice Department, with the help of law enforcement agencies from other countries, issued indictments in connection with the Blackshades Remote Access Tool (RAT) “that enabled users around the world to secretly and remotely control victims’ computers,” according to the Manhattan U.S. attorney’s office, which said the bust involved more than 90 arrests in 19 countries.

The Blackshades RAT is malicious software, or malware, that has been used by criminals in more than 100 countries to “infect computers throughout the world to spy on victims through their Web cameras, steal files and account information, and log victims’ key strokes,” according to the Justice Department. The alleged co-creator of Blackshades, Alex Yucel, who is from Sweden, was arrested in Moldova and is awaiting extradition to the United States. Brendan Johnson, who is charged with helping to market and sell malware, including the RAT, and with providing technical assistance to its users, was arrested in Thousand Oaks, California.

Blackshades provides a good example of how you could be sitting in your home in Palo Alto and be victimized by a criminal on another continent or vice versa. Thanks to botnets, where malicious software spreads itself from computer to computer without the knowledge of the machine’s owners, it’s possible for a computer from Estonia to infect your home PC and for your home PC to then infect someone else’s PC in Germany.

There are plenty of other examples of international cybercrime. I’m on the board of the National Center for Missing and Exploited Children, which regularly cooperates with counterparts in other countries to try to stem the tide of illegal child pornography across borders. John Carr, a child safety adviser to the United Kingdom government, told me that a “substantial proportion” of the illegal images that make their way to the UK come from the United States.

Privacy is also a global issue, as we were reminded last week when the European Court of Justice in Luxembourg ruled that search engines (the biggest two being U.S.-based Google and Microsoft’s Bing) can be required to delete search listings of posts, including stories in newspapers, that may be dated or irrelevant, even if they happen to be true. This ruling could affect not only U.S. companies that offer search, but also those of us in the United States and other countries who use these services, even though the delete order was issued by a court on another continent.

At the Paris meeting, the discussion turned to international cooperation, and it was generally agreed that it’s a good idea for organizations in countries around the world to coordinate at least some of their messaging because of the similarities of the issues that we all face. That doesn’t mean that a campaign that works in Istanbul will necessarily resonate in Indianapolis. But in our increasingly globally connected world, there are plenty of common themes and practices that we can share.

In an interview, NCSA Executive Director Michael Kaiser summed up the purpose of the meeting. “We are trying to reach everyone on the globe because we’re all connected to the same Internet and, unless we’re all safe and secure, we won’t have a safe and secure Internet.”


Facebook changes new user default privacy setting to friends only — adds privacy checkup

This post first appeared on Forbes.com.

Until now, when new Facebook users sent out their first post, the default setting was Public, which meant that anyone could see it. It has long been easy to change the audience to Friends only, but if you didn’t know about that option, you could have accidentally told the world something you meant to tell only people you know and trust.

Disclosure: Facebook is one of the companies that provides financial support to ConnectSafely.org, the non-profit Internet safety organization where I serve as co-director.

Change for new users only

Facebook is changing the default setting for new users so that, going forward, posts go to Friends only. The change will have no impact on existing users. The first time someone posts, they will see a reminder to choose an audience for that post, and if they don’t make a choice, it defaults to Friends.

In a statement, Facebook said that “We recognize that it is much worse for someone to accidentally share with everyone when they actually meant to share just with friends, compared with the reverse.”

Changeable but sticky

You can easily change the audience of each post, and once you make a change it becomes sticky, meaning it stays that way until you change it again. So if you normally post just to friends and decide to post something publicly, you must remember to change the setting back to Friends the next time you post, or your subsequent posts will remain accessible to the public.

Privacy checkup

Facebook is also launching a “privacy check-up” to enable users to review their privacy practices and settings such as “who they’re posting to and the privacy settings for information on their profiles,” according to Facebook. It also helps users review which apps they’re using and “the privacy of key pieces of information on their profile.”

Earlier app privacy changes

At the F8 Facebook developers conference last month, Facebook CEO Mark Zuckerberg announced changes to its app privacy policies including allowing people to interact anonymously with apps (Facebook knows who you are but you have the choice about whether to reveal your identity to the app developer). Facebook is also providing users with more control over what information they reveal to apps as well as more control over what others can share about them via apps.

Steps in the right direction

I have to give Facebook credit for giving users more control over their privacy and changing the default from Public to Friends. It’s always been possible to control your privacy on Facebook, but it’s often been too complicated, especially for new users who could so easily post to a broader audience than they intended. I’m not entirely sure what motivated these most recent changes, but I suspect they will be welcomed by users.

The privacy checkup is another important step. One of the biggest complaints about Facebook privacy is that users don’t know what is out there that others can see and may not be aware of how to control who has access to their content. Of course, this isn’t the first time Facebook has sought to simplify its privacy settings. There have been numerous changes over the years, including, for example, the addition a couple of years ago of an activity log that helps people uncover what they’ve posted and what’s been posted about them.

There is still more work to be done in terms of educating users about their privacy and how to limit what people can see about them on Facebook and beyond. Facebook also needs to do more to educate people about how their personal information is used to direct advertising.


Experts say UK’s optional filtering equals choice, not censorship

Internet Matters website is part of educational campaign that accompanies optional UK filtering

Under pressure from Prime Minister David Cameron and other British officials, major UK Internet service providers are now requiring customers to opt in to or out of Internet filters that affect every device in the home. And, starting this week, most public Wi-Fi networks in the UK will have porn filters, so kids can’t get around parental protections when away from home and adults can’t access porn with their coffee in public places.

In the meantime, the UK’s four major ISPs — BT, Sky, TalkTalk and Virgin — have invested millions in an educational campaign centered around a new website called Internet Matters.

The opt-in filtering policy requires broadband customers to make a choice as to whether they want filtering for all devices connected to the home router. The actual details depend on the ISP; each has its own implementation. But if a customer opts in, the filter will affect all devices, which means that if you set your filters for content suitable for young children, everyone in the household, including older teens and adults, will have the same filter unless it is turned off by the adult in charge.

Writing in the Guardian, Laurie Penny, who describes herself as a journalist and feminist activist, argues that “In the name of protecting children from a rotten tide of raunchy videos, a terrifying precedent is being set for state control of the digital commons.”

But even though the Prime Minister was pushing for these controls, this is not, in fact, state control. Unlike the filters in place in countries such as Turkey, Bahrain and Saudi Arabia, as well as the “Great Firewall” of China, the UK filters are not mandatory. ISP customers have a choice as to whether to use the filters, and three of the UK’s leading Internet safety advocates – all of whom I work with closely and know to be sensitive to issues of free speech – feel that it’s not censorship but a way to give parents more control over the content that comes into their home.


Childnet International’s Will Gardner

“I don’t think it’s censorship,” said Will Gardner, head of Childnet International, a London-based non-profit that delivers technology educational programs throughout the UK. “It’s a really good approach because they’re giving people a choice,” he added. “Nobody is saying that every house needs a filter, but people need to think about what is going to work best for their situation.”

UK government advisor, John Carr

UK government advisor John Carr

John Carr, an Internet safety advisor to the UK government, calls charges of censorship “a ridiculous idea.” He points out that “Everything that is accessible on the Internet today remains accessible tomorrow. Nobody is deleting, removing or altering anything. People are simply being asked to say whether or not they want to use the filters they are being offered by their ISP. They can say ‘yes’ or ‘no.’ Not a big deal.”


Dave Miles – heads FOSI in Europe & Middle East

David Miles, the director for Europe, Middle East & Africa for the Family Online Safety Institute (FOSI), is cautiously optimistic that the implementation of the new whole-home filtering solutions will give parents the choice and the tools they need to control the content that is coming into their homes. “The filtering is predominantly applied to content that is inappropriate to minors, so concerns of censorship are somewhat of an overreaction in my view,” he said.

Of course there is nothing new about filtering. Parents in the U.S., UK and other countries have long had the ability to install filters on home PCs using software that they could purchase or download for free from companies including Microsoft, Symantec and many Internet service providers.

But a big difference with the UK approach is that it affects the entire home Internet connection, which means the setting will apply to all wired and wireless devices connected to the home router, including PCs, game consoles, tablets and even Wi-Fi-connected phones. Mobile devices connected to the cellular network are not affected, but UK cellular companies already filter content by default, with the option to opt out.

The obvious problem with the UK approach is that it’s one-size-fits-all, so a household of adults, teens and young children would have a single setting – likely set to protect the youngest children – that affects all users.

“It would be much better if every family member had their own individual login with an associated age-appropriate profile,” said Carr, who told me that he repeatedly said as much to the ISPs but they were “not prepared to go down that route.”

But, said Carr, allowing parents to configure multiple profiles “would have made implementation much more complex and costly.” He called the ISPs’ strategy an experiment: “No one has a textbook full of tried-and-tested answers. This is innovation, something the hi-tech industries are meant to be good at.”

Miles agrees. “I think the new whole-home filtering should be seen as a starting point. I would suggest that parents do an audit of the number of Internet-enabled devices they have in the home.” He suggests that parents of kids of varying ages consider installing device-side filters instead of using the whole-house approach.

As Gardner pointed out, there are a “number of obstacles to traditional device filtering” including “cost, ease of use and knowing they exist.” The other problem is that there are now multiple devices in many homes including PCs, tablets, game consoles and even Wi-Fi-connected media players like the iPod Touch. Without a router-based solution, parents would have to install and configure filters on each device.

The American experience certainly validates Gardner’s observation. Parental uptake of device-side filters has always been pretty low but one notable exception was when the then-mighty AOL automatically enrolled children in a filtered “Kids only” experience based on the child’s age. Parents could opt out of filtering, but most didn’t.

Public Wi-Fi filters

Kids trying to get around parent-enforced content filtering won’t be able to do so from most public Wi-Fi hotspots, which now also block sexually explicit and other inappropriate content. In addition to blocking kids’ access, these filters keep people nearby from having to look at potentially offensive content.

Even in the U.S. there are many public Wi-Fi services that block pornography and other adult-only sites. I discovered that at a café near my home in Silicon Valley last July when I was shopping for block ice for my Fourth of July party. The only store that carries it in my area is a wine shop, but when I went to their site to find their location and hours, I was blocked because the site promotes the use of alcohol. Ironically, there are plenty of nearby billboards that also promote alcohol and no filters to prevent kids from viewing them. The café’s filters also prevented me from going to a site about “wearable technology.”

How the UK got here

Over the last few years the UK has seen some highly publicized cases of child endangerment which, ironically, had essentially nothing to do with kids’ access to porn. Yet the October 2012 murder of 5-year-old April Jones, whose killer had accessed child abuse images just hours before her death, touched off a public outcry for all sorts of measures to protect kids online. Another tipping point, according to Miles, was the revelation, after the death of BBC television personality Jimmy Savile, that he had sexually abused hundreds of children over a period of decades. “The Internet had nothing to do with Savile’s crimes,” said Miles, “but caused people throughout the UK to think about sexual crimes against children and the need to do everything possible to prevent further abuse.”

It’s also true that keeping porn away from kids would not have prevented the murder of April Jones, but these two cases, said Carr, contributed to a general consensus that something had to be done to protect kids in this digital age.

Prime Minister Cameron, who is a father of three children, became directly involved in pushing for a number of child protections including not just filters but also a commitment from Google and other search engines to block access to search terms likely to lead to child pornography.

Despite its similarities to the United States, the UK remains a unique situation. For one thing, said Miles, the country’s media often takes a strong advocacy position, and the press has focused heavily of late on Internet crimes against children. On the positive side, there is also widespread cooperation and conversation among stakeholders. The UK Council for Child Internet Safety (UKCCIS), which was established in 2008, brings government, non-profits and industry together and, said Miles, “has enabled these initiatives to flourish.” Carr, Gardner and Miles are all on the UKCCIS board.

Thoughts on filtering

I do think it’s a good idea for parents to have a choice about whether to use Internet filters, and I agree that parental control is not the same as government censorship. However, I also think it’s important for parents to think about why they are turning on a filter and, if they do, to understand that it’s not a be-all solution. It’s also important to discuss critical thinking, reputation management, treatment of others and media literacy with your kids. If you use filters, think about how and when to loosen the reins because, eventually, they’ll grow up to be adults, leave home, and it will be up to them to protect themselves. As I argued in an earlier post, digital citizenship and media literacy beat tracking laws and monitoring.

I also agree with my UK Internet safety colleagues’ concerns about the ISPs’ two-sizes (on and off) approach. The needs of a 5-year-old are a lot different from those of a 17-year-old, which makes a household-wide filtering system a bit inconvenient.

We used filters for a short time when my kids were young but turned them off after my son complained that he was being blocked from some gaming and music sites. Instead we relied on family policies and an occasional parental peek into the browser history to see what was going on. I’m not claiming that my kids never saw porn – that would be highly unlikely – which is why I wrote What to do if your kids are looking at porn.

Internet Matters

In addition to requiring parents to opt into or out of filters, leading British ISPs have banded together to fund a nationwide education campaign called Internet Matters. Its website has information on age-appropriate strategies and technologies, but the ISPs are also committed to providing education and advocacy through other media, including advertisements. The reported budget for this program is £75 million ($127 million), but, said Carr, most of this is based on estimates of ad spending the companies may have been doing anyway. He said approximately £6 million ($10.2 million) of new money is dedicated to the non-profit organization set up to run the program, which is still a massive investment compared to any Internet safety campaign I’m aware of.



Do kids need special protections or do we all need them?

This post is a work in progress and subject to editing and updates

by Larry Magid

We have always taken child safety more seriously than adult safety, even where children are not necessarily at greater risk than adults. It’s probably instinctual. As every parent knows, protecting one’s offspring is not something we even have to think about. We just do it.

Children not always a special case

But when it comes to risk, children are not always a special case. For example, in California it’s against the law for children under 18 to ride a bicycle without a helmet, yet every year nearly 17,000 adults are killed or injured in bicycle accidents in the U.S., compared to about 2,200 children. More than 88% of the injured are adults, even though adults make up 77% of the population, which means that people over 18 are more likely to be injured than children. I think of this every time I ride my bike through the Stanford University campus near my home and notice all those over-18 people putting their expensively educated brains at risk by legally riding without helmets.

Bullying doesn’t just affect kids

The same can be said of other types of risk. Workplace bullying among adults is more common than schoolyard bullying of kids. A 2010 study commissioned by the Workplace Bullying Institute and conducted by Zogby International found that more than a third (35%) of workers “have experienced bullying firsthand.” That’s higher than any of the credible studies of youth bullying. Plenty of adults are affected by cyberbullying (adults sometimes call it “trolling”) as well.

Why are privacy laws aimed at kids?

There are all sorts of existing and proposed privacy laws to “protect” children, yet there is no evidence that kids have any more privacy risks than adults. In fact, adults may have even more privacy risks given what they do online and the potential payback for being able to sell them products based on their online activities. When it comes to government surveillance, nearly all the victims are adults. And from what I’ve been able to observe in social media, adults are not typically more privacy savvy than teens when it comes to what they post. In fact, a 2013 Pew Research survey found that teens are more privacy conscious than many adults give them credit for.

Yet the federal Children’s Online Privacy Protection Act (COPPA) is aimed only at kids under 13 while California’s “eraser button law” affects only people under 18.

COPPA requires “verifiable parental consent” before a child under 13 can provide personally identifiable information (including an IP address) to a commercial service. The well-meaning law is designed to protect children’s privacy, but its unintended consequence is to ban children from social media sites because of the enormous cost, hassle and, ironically, privacy and security risks associated with obtaining that consent. COPPA also discriminates against children whose parents, for a variety of reasons (including limited English-language or technology skills and fear of government), are unable or unwilling to comply with it.

Congress is considering yet another law, the Do Not Track Kids Act (see Anne Collier’s analysis here), that assumes young people are at higher risk because “Children and teens lack the cognitive ability to distinguish advertising from program content and to understand that the purpose of advertising is to persuade them, making them unable to activate the defenses on which adults rely.” But that lumps together the age range of zero through 17. It may be true of very young children, but I haven’t seen evidence to suggest that teen Internet users fail to understand the difference between advertising and content any more than adult users do.

Rational laws could protect everyone

To the extent that we need new laws to protect privacy, they should be aimed at everyone, regardless of age. Senior citizens need protection, young adults need protection, parents raising families need protection as do workers at every age. We all need greater transparency so that we know (and are able to understand) how our data is being used and we need laws that limit what government can collect or do with data that companies collect.

Failing to see the bigger picture

“When we think about kids online or off, we tend to automatically overestimate the dangers, because that’s all we ever see in the media,” said Lenore Skenazy, author of the book and blog Free-Range Kids. What’s more, she added, “we also don’t see the billions of friendly, helpful or informative chats kids have online daily — only the cyberbullying and sexting. Our fears don’t match reality.” Skenazy is describing the disproportionate amount of media attention paid to risks to youth. Even the recent incidents of school violence are often interpreted out of context. Data from the U.S. Bureau of Justice Statistics shows a general decline in school-related homicides between 1992 and 2011.

Reflecting on the past 20 or so years, Crimes Against Children Research Center director David Finkelhor, in his paper The Internet, Youth Safety and the Problem of “Juvenoia,” observed an overall positive trend in most youth-related risks, including crime victimization, sexual assault and even bullying, leading him to conclude: “Given the convergence of positive indicators regarding children, there is a good chance that we will look back on this era as one of major and widespread amelioration in the social problems affecting children and families.”

Homicides of youths ages 5-18 at U.S. schools, by school year, from the annual report “Indicators of School Crime and Safety: 2012,” by the Bureau of Justice Statistics


Children and teens need respect as well as protection

Of course, adults want to keep children safe. But it’s also important to respect them and their rights and to be very thoughtful before using “protection” as an excuse to limit their rights and privileges. Plus, we need to remember that with kids as with adults, one size doesn’t fit all. What’s suitable for a young child may not be suitable for a teen and, even within the same age groups, not all kids are equal when it comes to risk.

Ironically, the adult fear (often irrational) of teen use of social media is, according to author danah boyd, related to our clamping down on their freedoms in the physical world because of parental fears. In her book It’s Complicated, boyd suggests that “teens simply have fewer places to be together in public than they once did,” which is one of the reasons they flock to social media.

It’s also important to remember that risk is a necessary component of life and an important part of growth and learning for both children and adults. Josie Gleave’s Risk and Play: A Literature Review examines scholarly research in this field, including many studies that suggest that risk-taking is an essential and beneficial aspect of play. She concludes, “There is evidence to suggest that many of the measures that have been taken to create ‘safer’ play for children are neither necessary nor effective.”

Posted in Child safety | Comments Off

Google pulls scanning and ads from education apps

Google Education apps

In addition to its consumer and business services, Google provides apps for education, including Gmail, Drive, Docs, Sheets and YouTube for Schools. These apps, according to the company, serve more than 30 million students, teachers and administrators.

Disclosure: I’m co-director of ConnectSafely.org, a non-profit Internet safety organization that receives financial support from Google  and other companies.

Google wrote that “Users who have chosen to show AdSense ads on their Google Sites will still have the ability to display those existing ads on their websites. However, as of October it has not been possible to edit or add new AdSense ads to existing sites or to new pages.”

Google also said that it has “permanently removed the ‘enable/disable’ toggle for ads in the Apps for Education Administrator console. This means ads in Apps for Education services are turned off and administrators no longer have the option or ability to turn ads in these services on.”

Acknowledged scanning per lawsuit

As Education Week reported in March, Google acknowledged the scanning in response to a lawsuit over data mining within its education apps.

There had been questions over whether Google’s scanning violated the federal Family Educational Rights and Privacy Act (FERPA), which regulates information that can be collected from and about students.

Recently published guidance from the U.S. Department of Education is a little vague about whether it covers services like Google Docs. In response to the question “Is Student Information Used in Online Educational Services Protected by FERPA?” the document answers, “It depends. Because of the diversity and variety of online educational services, there is no universal answer to this question.” The document says that “schools and districts will typically need to evaluate the use of online educational services on a case-by-case basis.”

Smart move



Facebook adding more user-control of app privacy


Facebook to offer more control over what information users must reveal to apps

This post first appeared on Forbes.com

Speaking at Facebook’s F8 developers’ conference, CEO Mark Zuckerberg acknowledged that “Over the years one of the things we’ve heard over and over again is that people want more control over how they share their information, especially with apps.” He added, “If people don’t have the tools they need to feel comfortable using your apps, then that’s bad for them and it’s bad for you.” He pledged that “we need to do everything we can to put people first and give people the tools they need to sign in and trust (your) apps.” Facebook also posted details on its developers blog.

Disclosure: I’m co-director of ConnectSafely.org, a non-profit Internet safety organization that receives financial support from Facebook and other companies.

Addresses user fear

Zuckerberg addressed an issue that has plagued me ever since Facebook started allowing third-party app developers to let users sign in with Facebook. It has always felt like a bit of a black box: when I sign into an app, I’m never sure what information I’ll share, not only with the app developer but with my friends. There is even the risk of revealing information about your friends by using a third-party app. “We know that some people are scared about pressing this blue (Login with Facebook) button,” said Zuckerberg, adding, “if you’re using an app that you don’t completely trust or you’re worried might spam your friends, then you’re not going to give it a lot of permissions.”

Change “line by line” what you reveal to apps

Last year Facebook separated read and publish permissions so that apps can no longer require you to publish to all your friends, and the company is now adding a dialog that lets you change “line by line what you share with this app,” said Zuckerberg. You could, for example, choose not to share your email address or other details, or withhold other permissions. “I can sign in on my own terms,” said Zuckerberg.

Users get to control what their friends share about them

Zuckerberg also admitted that people can sometimes be surprised when friends share some of their data with an app. In the past, when a friend logged into an app, that app could ask the friend to share not only their own data but also data their friends had shared with them. But, going forward, “we are going to make it so now everyone has to choose to share their own data with an app themselves.”

Anonymous Login


Sometimes when you want to try a new app, you don’t really want to create an account or sign in with your real identity, so Facebook is offering a new feature called Anonymous Login that enables you to sign into a new app without revealing who you are. Facebook, of course, knows who you are, but with the anonymous service it won’t tell the app. It does give you an anonymous identifier that enables you to use the app on various devices. Later, you have the option of signing in under your real name.

App links

Facebook also introduced “App Links,” a platform that enables developers to “map your web content to your mobile content” across devices and platforms.

As this Facebook video explains, App Links make it easier for developers to let users link directly into their apps.
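Under the hood, App Links is an open, published specification (applinks.org) that works through meta tags a website adds to its pages, telling App Links-aware apps how to open the same content natively. The sketch below shows the shape of those tags; the URLs, app name, package and App Store ID are hypothetical placeholders, not real values.

```html
<html>
<head>
  <!-- Deep link targets for iOS -->
  <meta property="al:ios:url" content="exampleapp://story/1234" />
  <meta property="al:ios:app_store_id" content="123456789" />
  <meta property="al:ios:app_name" content="Example App" />
  <!-- Deep link targets for Android -->
  <meta property="al:android:url" content="exampleapp://story/1234" />
  <meta property="al:android:package" content="com.example.app" />
  <meta property="al:android:app_name" content="Example App" />
  <!-- Fallback if the native app isn't installed -->
  <meta property="al:web:url" content="http://example.com/story/1234" />
</head>
</html>
```

When another App Links-aware app (such as a mobile social app) encounters a link to this page, it can read these tags and jump straight into the corresponding screen of the native app rather than opening the mobile browser.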


A taste of Turkish Internet censorship

This column first appeared in the San Jose Mercury News

ISTANBUL, Turkey — Today’s column was supposed to be about alternatives to the broken user-name and password systems, but as I was doing my research, I hit upon an obstacle that required me to change topics.

I had planned to comment on a report about a security flaw in Samsung’s recently released Galaxy S5 phone that enabled hackers to bypass the phone’s fingerprint recognition system. According to several press reports, researchers at SR Labs had posted a video on YouTube showing how they were able to unlock the phone using a mold of a real fingerprint. But, when I clicked on a link to view the video I saw instead a message telling me that YouTube was being blocked based on “subparagraph 4 of article 8 of law number 5651.”

You see, I was working from a hotel room last week in Istanbul, where the Turkish government has blocked access to YouTube. The government had earlier blocked Twitter, but the ban was lifted by court order. A court also ordered that YouTube access be restored, but according to Reuters, the government has decided to defy that order.

Ironically, I’m in Istanbul to help organize some workshops for the United Nations’ Internet Governance Forum (IGF), which takes place here in September. The IGF is where delegates from governments, companies, nonprofit organizations and universities around the world discuss a variety of Internet policy issues, including freedom of expression. One of the workshops I’m organizing is about the tension between child protection and child rights, but given what’s happening here in Turkey and elsewhere around the world, I’m tempted to expand it from child rights to human rights.

One excuse officials here used to block Twitter and YouTube was that leaked content about possible Turkish military action against Syria was threatening national security. But people I’ve spoken with here in Istanbul tell me that Prime Minister Recep Tayyip Erdogan’s problem with Twitter and YouTube has more to do with the sites being used to expose official corruption, including postings on YouTube of audio tracks from compromising phone conversations among officials.

I’m in no position to judge what is or isn’t true about the claims and counterclaims here in Turkey, but it’s hardly the first time we’ve heard a government official using national security as an excuse to interfere with civil liberties.

What happened in Turkey has not happened in the United States, nor is it likely to happen. If our government were concerned about the security implications of a particular piece of content, it would likely attempt to block that content rather than the entire site that is hosting it. But as Edward Snowden and others have amply demonstrated, there are actions our government does take in the name of national security — including storing metadata about phone records — that some argue could have a chilling impact on speech.

And before we get too self-righteous about what’s happening in Turkey, it’s also important to recall relatively recent free speech debates in the United States. In 1996, Congress passed the Communications Decency Act, which would have completely blocked online sites deemed “harmful to minors,” had the Supreme Court not overturned those provisions. The CDA was aimed at pornography, not political speech, but there were many who worried about the slippery slope of the government banning otherwise legal speech in the name of “child protection.”

Even the 2012 debate about the Stop Online Piracy Act and the Protect IP Act raised troubling censorship issues because, at one point, they contained a clause that would have allowed officials to effectively block access to entire sites accused of illegally distributing copyrighted material. I bring this up not to rekindle the debate about those bills, which were withdrawn as a result of an organized online protest movement, but as a reminder that we in the United States also need to look inward when criticizing the actions of other countries.

As I sit in my Istanbul hotel room unable to do my work because of a government blockage, I recall the last time I had to postpone a writing project because of a government. Last fall I was working on a booklet called “A Parents’ Guide to Cyberbullying” and needed access to bullying statistics from a research study conducted by the U.S. Centers for Disease Control and Prevention. But when I clicked on the link to the study, I got a message telling me that the CDC’s website was offline because of the government shutdown over Congress’ refusal to agree on a spending bill. It wasn’t censorship, but it was politically motivated and, to a writer trying to access government-funded research, very annoying.

For me, not being able to watch a YouTube video was only a minor inconvenience — I can revisit the subject of fingerprint recognition sometime later. But for the Turkish people, it’s infuriating when their government shuts down an entire social media platform just because some officials are unhappy with what some people have posted there. It’s also a reminder that civil liberties are precious and should never be taken for granted, even in the United States, where freedom of speech is enshrined in our Constitution.


Bullying Books Empower Students, Parents and School Personnel


Author Nancy Willard

Nancy Willard has been writing and speaking about cyberbullying since practically before the term was coined. But, like most cyberbullying experts, she knows that cyberbullying — for the most part — is bullying. And that — plus a lot of research, a Master’s in Education and a law degree — qualifies her as a bullying expert.


New book by Nancy Willard helps educators and parents deal with bullying

Willard has recently written two important books. One, which you can buy on Amazon for $40.19, is Positive Relations @ School (& Elsewhere): Legal Parameters & Positive Strategies to Address Bullying & Harassment. 

If you’re a school administrator, a counselor, a teacher or a parent leader, you owe it to yourself and your students to read this book. In it Willard focuses on what schools are doing to stop bullying and what is and isn’t working. Wearing both her educator and lawyer hats, she shares insights into bullying and looks at laws and enforcement while providing supporting resources and an “action plan.” Willard — who I often quote in my bullying articles — writes about “hurtful speech vs. free speech” and explores “disparaging speech on campus.” 

Well meaning adults can make things worse

Naturally, adults at school and home want to support kids in their care, but Willard points out that many of the most commonly used approaches, like a strict disciplinary policy, are often ineffective. She cites a study that found that “while 87% of school staff think they have effective strategies for handling bullying, 58% of middle and 66% of high school students believe adults at school are not doing enough to stop or prevent bullying.”

A free ebook for parents

Free ebook for parents. Download here

Willard has also written a free 26-page ebook that you can download from her Embrace Civility website. The short ebook, which draws on some of the material in her education book, talks about why “the current bullying prevention approach is not working” and gives parents advice on legal protections for their bullied child or teen. It includes “strategies to prepare and make your case for the need for more effective intervention in the situation facing your child or teen,” short chapters on legal protections, including “preparing and making your case,” and practical tips to help resolve and defuse problems. “One of the biggest mistakes the parent of a bullied child or teen can make is calling for the student(s) who are being hurtful to be ‘punished,’” wrote Willard. “Holding these students accountable and ensuring their hurtful actions are stopped is essential. Punishment will not accomplish this.”

Anne Collier, my ConnectSafely.org co-director, has more thoughts on Willard’s books, along with some insights of her own, in her blog post, “A positive, insightful new book for schools on bullying.”

Another free resource is A Parents’ Guide to Cyberbullying from ConnectSafely.org, the non-profit organization where I serve as co-director. In this 8-page guide, we focus on just the basics parents need to know when dealing with bullying online and on mobile devices (which, of course, often has its roots at school).


Download free 8-page parents guide


Facebook’s ‘Nearby Friends’ feature: What you need to know


New feature shows when friends are nearby (and when you’re near them)

This post is adapted from one that first appeared on Forbes.com

Facebook is rolling out a new feature for its mobile app that allows you to share your approximate location with friends. The opt-in feature (it’s turned off by default) enables you to find and be found by nearby friends. It can be turned on or off at any time, and both parties have to have it enabled. When you configure the feature, you can choose to share with all your friends or a subset, such as only family or only close friends.

The feature is only available for users over 18 so, unless they lie about their age, it is not available to minors.

Listen to Larry Magid’s 1-minute CBS News segment on Nearby Friends

With this feature enabled, you might be able to know that a friend is nearby so you can meet up.

In addition to sharing your approximate location with a group of friends, you have the option of sharing your precise location with specific friends, and you can decide how long your specific location will be shared. For example, I could decide to share my exact location with Susie Smith between now and 11:00, assuming Susie also had the feature turned on. I would only know her location if she shared it with me.

If friends are traveling, you will be able to see the city and neighborhood that they’re in, according to a Facebook blog post.

Friends only

The maximum exposure is friends. You cannot set it for friends of friends or display your location publicly. Users’ location history is permanently set to “only me,” according to a Facebook spokesperson.


Safety and privacy  

The feature certainly can be misused to reveal your location to someone who shouldn’t know it, but there are plenty of safeguards available, as long as you use them. As with any location app, you should only share your precise location with people you know and trust. Even sharing your approximate location could be inappropriate, or even dangerous, if people you don’t want to find you can use it to figure out where you are. It can also have other implications: if your boss thinks you’re at an offsite meeting in Los Angeles but you’re sharing that you’re actually in San Francisco, you could have a big problem on your hands.

Not for minors under 18

This feature is for adults only: no one under 18 can use it, assuming they have signed up with their correct age.

Other location sharing apps

Facebook is hardly a pioneer when it comes to location sharing. Glympse (also a ConnectSafely supporter) has long offered a location-sharing app that even allows you to follow someone as they drive. By default, it times out in 15 minutes, but you can opt to share your location for a maximum of four hours.



Disclosure: Facebook provides financial support to ConnectSafely.org
