My new IT hero: “…detonate…”

Nuff said.

In case you missed it: PA school district spies on students via webcams

From InfoWorld and Network World come great pieces on this must-read tale about schools spying on students.  It stems from an unbelievable story from the Lower Merion School District near Philadelphia.  See if this summary gets you to read more.

School gives kids laptops with web-cams.

School doesn’t tell kids or parents they are monitoring the video from these web-cams.

School administrator sees kid eating Mike and Ike candy, doesn’t realize it’s candy, thinks the kid is popping pills.

Administrator brings kid into office to confront kid on what the administrator believes to be illegal drug use.

Massive lawsuit ensues.

Lower Merion schools used to be primarily known for being where Kobe Bryant played scholastic ball.  Not anymore.

Google’s Official Response to this week’s Postini Spam-Filtering Problems

From: Google Enterprise Support <>
Subject: Postini Services Incident Update
Date: Fri, 16 Oct 2009 01:17:21 -0400 (EDT)

Google Inc.
1600 Amphitheatre Parkway
Mountain View, CA  94043

Postini Incident Report
Service Disruption – October 13, 2009
Prepared for Postini Services Customers

Dear Postini Customer,

The following is the incident report for the issues with mail delivery and Administration Console access
that some Postini customers experienced on October 13, 2009. We understand that this service
disruption has affected our valued customers and their users, and we sincerely apologize for the impact.

Issue Summary
Beginning at approximately 10:25 PM PDT, Monday October 12 | 5:25 GMT, Tuesday October 13,
affected customers experienced severe mail delays and disruption. Also, during this time, affected
customers had intermittent access to the Administration Console, Message Center, and Search Console.
The root cause of the delivery problem was an unintended side effect of a filter update, compounded by
database issues that further slowed message processing.

Incoming messages may have been deferred; no messages were bounced from recipients or deleted. In
some cases, sending servers may have stopped resending messages after a deferral and returned
delivery failure notifications to senders. (Typically, servers are set up to retry sending for up to five days.)
During the incident, timely status information about the incident was not consistently available to
customers. We posted information on the Support Portal and from the @GoogleAtWork Twitter account;
however, customers often experienced problems accessing the portal due to load issues, and updates
were not included on the Postini Help forum. Also, the Postini status traffic lights intermittently showed a
“green light” instead of indicating the delivery delay. Customers calling in to report cases experienced
very long wait times.

Actions and Root Cause Analysis
At approximately 11:30 PM PDT, Monday October 12 | 6:30 GMT, Tuesday October 13, monitoring
systems detected severe mail flow issues and automatically directed mail flow to the secondary data
center. Upon receiving the error alerts, the Engineering team immediately began analyzing the issue and
initiated a series of actions to help alleviate the symptoms. Message processing continued to perform
poorly in the secondary data center.

Mail traffic was then directed across both the primary and secondary data centers to maximize processing
resources. During this time, Engineering temporarily disabled the Administration Console and other web
interfaces to reduce impact to the processing infrastructure. Engineering performed a set of extensive
diagnostics and tests and determined the cause to be a combination of the following factors:

• A new filter update appears to have inadvertently impacted the mail processing systems.
• Unusually malformed messages triggered protracted scanning behavior, and their
interaction with the filter update affected mail delivery.
• A power-related hardware failure with database storage servers reduced input/output rates. The
latency in database access reduced our overall processing capacity.

The combination of these conditions resulted in high failure rates for mail processing and the deferral of
new connections from sending mail servers.

To fix the database issue, Engineering worked with the hardware vendor to replace the faulty hardware
component. At 11:00 PM PDT, October 13 | 6:00 GMT, October 14, database disk input/output
throughput returned to normal.

At 12:30 AM PDT | 7:30 GMT Wednesday October 14, the filter update was revoked, and mail processing
returned to full capability. As a precautionary measure, Engineering continued to process a portion of
traffic through both the primary and secondary data centers. Mail processing was restored to the primary
data center at 1:39 AM PDT | 8:39 GMT. Although mail processing was at normal speed and capacity,
some users may have seen delayed messages continue to arrive in their inboxes. These potential delays
occur when the initial or subsequent delivery attempt is deferred and the sending server waits up to 24
hours before resending the same message.

Corrective and Preventative Actions
The Engineering and Support teams conducted an internal review and analysis, and determined the
following actions to help address the underlying causes of the issue and help prevent recurrence:

• Implement standard procedures for reverting filter updates as a mitigation measure and to help
speed time to resolution.
• Perform an in-depth analysis of the filter update to help ensure this class of error is not
repeated.
• Investigate the unusual malformed messages to quickly identify the message pattern and
thoroughly understand any impacts.
• Enable monitoring for notifications of the class of power failure that may affect the database
storage system.
• Determine whether the database storage servers can be configured to maintain the throughput
level during reduced power situations.
• To improve communications during incidents, we will:
◦ Post timely status updates to the Postini Help forum for better visibility.
◦ Accelerate the work to monitor and communicate the Postini services status on the
Apps Status Dashboard. The dashboard offers a single location for the latest service
status and options for RSS feeds. This will replace the traffic lights system and provide
more accurate and in-depth information.
◦ Moving forward, update the phone status message more quickly to inform customers
during an incident.
◦ Expand phone support capacity to handle spikes in call volume. This capacity is
expected to be available within the next several weeks.
◦ Update the maintenance pages that are displayed when the Administration
Console is unavailable with current information.

Over the next several weeks, we are committed to implementing these improvements to the Postini
message security service. We understand that system issues are inconvenient and frustrating for
customers. One of Google’s core values is to focus on the user, and we are committed to continually and
quickly improving our technology and operational processes to help prevent and respond to any service disruption.

We appreciate your patience and again apologize for the impact to your organization. Thank you for your
business and continued support.

The Postini Services Team

This Week’s Favorite Links – June 7, 2009

Information Week: Anti-U.S. Hackers Infiltrate Army Servers

We got into the nation’s cyber war capabilities and challenges on the radio last Thursday.  The story about Turkey-based (basted? lol) hackers M0sted infiltrating US Army web servers very much stuck out in my mind.  Not because hacking into a web server is that unique, or even the military element of it.

Most interesting to me was the very common method used to carry out the attack, namely SQL injection.  As described in a comment by InfoWeek user DigitalGrimm on the article linked in our post here:

These ‘hacks’ are easy enough for any person worth their weight to exploit and happen every day to hundreds of web sites. Most likely, judging by the described defacement, these were 90% automated attacks. Furthermore, if the web server is set up correctly (be it Linux, Windows, Mac, BSD, etc.) the most the group would have access to is the web site’s database, which should have nothing more than information for dynamic content. I doubt any company would be foolish enough to actually allow an externally accessible server to have access to internal-only data.

Sorry, but there will be no ‘kudos’ to the ‘hackers’ on this one.

We have seen many sites fall victim to this method of attack, and that an Army-maintained site was vulnerable just emphasizes what another recent Information Week article details quite well: Cybersecurity Review Finds U.S. Networks ‘Not Secure’.
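To make the point concrete, here is a minimal sketch of why SQL injection works and how parameterized queries stop it. This is my own illustration using Python's built-in sqlite3 module, not the actual M0sted attack (those details weren't published); the table and input are hypothetical.

```python
import sqlite3

# A toy database standing in for a site's dynamic-content store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (id INTEGER, title TEXT)")
conn.execute("INSERT INTO pages VALUES (1, 'Home')")

user_input = "1 OR 1=1"  # attacker-controlled value

# Vulnerable: the input is pasted directly into the SQL string,
# so "OR 1=1" becomes part of the query and matches every row.
vulnerable = conn.execute(
    f"SELECT title FROM pages WHERE id = {user_input}"
).fetchall()

# Safe: the driver binds the value as data, not SQL, so the whole
# string "1 OR 1=1" is compared against id and matches nothing.
safe = conn.execute(
    "SELECT title FROM pages WHERE id = ?", (user_input,)
).fetchall()

print(vulnerable)  # injection succeeded: every row comes back
print(safe)        # injection neutralized: no rows
```

The fix is a one-line change, which is exactly why DigitalGrimm's lack of sympathy is warranted: these attacks are largely automated and entirely preventable.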

This blog is one of my favorite recent discoveries.  Its tag line is “Each week we provide a handful of tips that will save you money, increase your productivity, or simply keep you sane,” and it has a feel similar to Lifehacker.  With posts like “Mono-Task and Work More Effectively” and “How to: Share iTunes Media With All Your Computers,” how can you not like it?

Reuters via the New York Times: Facebook Sells 1.96% Stake for $200 Million

According to the story “the stake, sold to Digital Sky Technologies based in London and Moscow, values the social networking site at $10 billion” which should bother you, even if you love Facebook.

WNYT 13: Computer virus invades Rensselaer County offices

Hey – I got on the News!

Sandy Family from the Sanford Financial Group – who we know from our association with Talk 1300 – invited me to speak at a seminar about how to protect oneself against identity theft.  The turnout was great – about 80 people came to the Holiday Inn on Wolf Road in Albany.  My last post on ID theft was written as a reference for the event.

My luck was pretty good that night.  In addition to being fortunate to be included on a panel with the Chief of Colonie Police, a high-profile attorney and a staffer from the State Attorney General’s Office – I got on the local news too.

Beth Wurtman from local NBC affiliate WNYT asked me to tape some remarks in the hallway during the tail end of the event.  I took some ribbing at the office too.  As the TV spot identified me as a “computer expert,” our design staff felt compelled to make stickers (see pic).  Everyone at the WSG offices was wearing one when I came in the next day.

Here’s the video:
