Barracuda Networks today released their 2010 mid-year security report, and they’re looking askance at some big names. For one thing, the headline on Barracuda’s site reads:
Google Crowned “King of Malware” – Has Two Times More Malware than Bing, Yahoo! and Twitter Combined
So, search with care, friends. We encourage you to browse the report and see what they mean.
The findings are worth a read, and you can get through the report quickly. The payoff is learning things like: of every 100 Twitter users, 90 have fewer than 100 followers. (I’m in that top ten percent, if you’re wondering, and so is the WSG Twitter account.)
Google Voice numbers are no longer invitation only. The big G announced today that the service would be available on demand for Google ID holders. I’ve had a GV number for a while, and we were just able to beg/borrow/steal an invite for a client to use on their new site.
What’s Google Voice, you ask?
Aardvark, which we reviewed favorably last fall, was in the news today: it is reportedly about to become a member of the Google empire.
The deal, which was first reported by TechCrunch and confirmed by Mashable yesterday, is said to be for around $50 million. We’ll see how Google puts Aardvark’s functionality into play, but Mashable’s post has some good guesses.
From: Google Enterprise Support <firstname.lastname@example.org>
Subject: Postini Services Incident Update
Date: Fri, 16 Oct 2009 01:17:21 -0400 (EDT)
1600 Amphitheatre Parkway
Mountain View, CA 94043
Postini Incident Report
Service Disruption – October 13, 2009
Prepared for Postini Services Customers
Dear Postini Customer,
The following is the incident report for the issues with mail delivery and Administration Console access
that some Postini customers experienced on October 13, 2009. We understand that this service
disruption has affected our valued customers and their users, and we sincerely apologize for the impact.
Beginning at approximately 10:25 PM PDT, Monday October 12 | 5:25 GMT, Tuesday October 13,
affected customers experienced severe mail delays and disruption. Also, during this time, affected
customers had intermittent access to the Administration Console, Message Center, and Search Console.
The root cause of the delivery problem was an unintended side effect of a filter update, compounded by
database issues that further slowed message processing.
Incoming messages may have been deferred; no messages were bounced from recipients or deleted. In
some cases, sending servers may have stopped resending messages after a deferral and returned
delivery failure notifications to senders. (Typically, servers are set up to retry sending for up to five days.)
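The deferral behavior described here follows standard SMTP retry semantics: on a temporary (4xx) failure, the sending server queues the message and retries on an escalating schedule until a configured queue lifetime expires, after which it bounces the message back to the sender. A minimal sketch of that schedule (the intervals and five-day lifetime are typical MTA defaults, not Postini specifics):

```python
from datetime import timedelta

# Typical MTA behavior: retry intervals ramp up, total queue time is capped.
# These values are illustrative defaults, not taken from any specific MTA.
RETRY_INTERVALS = [timedelta(minutes=m) for m in (15, 30, 60, 120, 240)]
MAX_QUEUE_LIFETIME = timedelta(days=5)  # "up to five days" per the report

def retry_schedule():
    """Yield the elapsed time of each retry of a deferred message,
    stopping once the queue lifetime is exceeded (the message then
    bounces and the sender gets a delivery failure notification)."""
    elapsed = timedelta(0)
    attempt = 0
    while True:
        # After the ramp-up, keep retrying at the longest interval.
        interval = RETRY_INTERVALS[min(attempt, len(RETRY_INTERVALS) - 1)]
        elapsed += interval
        if elapsed > MAX_QUEUE_LIFETIME:
            return  # give up; sender is notified of the failure
        yield elapsed
        attempt += 1

attempts = list(retry_schedule())
print(len(attempts), "retries before bounce; last at", attempts[-1])
```

A server that stops resending earlier than this (as some did during the incident) simply truncates the schedule and returns the failure notification sooner.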
During the incident, timely status information was not consistently available to
customers. We posted information on the Support Portal and from the @GoogleAtWork Twitter account;
however, customers often experienced problems accessing the portal due to load issues, and updates
were not included on the Postini Help forum. Also, the Postini status traffic lights intermittently showed a
“green light” instead of indicating the delivery delay. Customers calling in to report cases experienced
very long wait times.
Actions and Root Cause Analysis
At approximately 11:30 PM PDT, Monday October 12 | 6:30 GMT, Tuesday October 13, monitoring
systems detected severe mail flow issues and automatically directed mail flow to the secondary data
center. Upon receiving the error alerts, the Engineering team immediately began analyzing the issue and
initiated a series of actions to help alleviate the symptoms. Message processing continued to perform
poorly in the secondary data center.
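The automated failover described above is a common pattern: a monitor tracks a processing health metric and, once it crosses a failure threshold, re-routes traffic to a standby site. A minimal illustration of the routing decision (the names and threshold are hypothetical, not Postini internals):

```python
# Hypothetical sketch of threshold-based failover between data centers.
PRIMARY, SECONDARY = "primary-dc", "secondary-dc"
FAILURE_THRESHOLD = 0.20  # assumed: route away if >20% of messages fail

def route_target(failure_rate: float, active: str) -> str:
    """Pick the data center that should receive new mail flow, based on
    the observed processing failure rate at the currently active site."""
    if active == PRIMARY and failure_rate > FAILURE_THRESHOLD:
        return SECONDARY  # alert fires; traffic shifts to the standby
    return active

print(route_target(0.05, PRIMARY))  # healthy: stays on primary
print(route_target(0.45, PRIMARY))  # severe failures: fail over
```

Note that, as the report goes on to describe, failover alone did not help here, because the root causes (the filter update and the database latency) affected processing in both data centers.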
Mail traffic was then directed across both the primary and secondary data centers to maximize processing
resources. During this time, Engineering temporarily disabled the Administration Console and other web
interfaces to reduce impact to the processing infrastructure. Engineering performed a set of extensive
diagnostics and tests and determined the cause to be a combination of the following factors:
• A new filter update appears to have inadvertently impacted the mail processing systems.
• Messages with unusual malformations triggered protracted scanning behavior, and their
interaction with the filter update affected mail delivery.
• A power-related hardware failure with database storage servers reduced input/output rates. The
latency in database access reduced our overall processing capacity.
The combination of these conditions resulted in high failure rates for mail processing and the deferral of
new connections from sending mail servers.
To fix the database issue, Engineering worked with the hardware vendor to replace the faulty hardware
component. At 11:00 PM PDT, October 13 | 6:00 GMT, October 14, database disk input/output
throughput returned to normal.
At 12:30 AM PDT | 7:30 GMT Wednesday October 14, the filter update was revoked, and mail processing
returned to full capability. As a precautionary measure, Engineering continued to process a portion of
traffic through both the primary and secondary data centers. Mail processing was restored to the primary
data center at 1:39 AM PDT | 8:39 GMT. Although mail processing was at normal speed and capacity,
some users may have seen delayed messages continue to arrive in their inboxes. These potential delays
occur when the initial or subsequent delivery attempt is deferred and the sending server waits up to 24
hours before resending the same message.
Corrective and Preventative Actions
The Engineering and Support teams conducted an internal review and analysis, and determined the
following actions to help address the underlying causes of the issue and help prevent recurrence:
• Implement standard procedures for reverting filter updates as a mitigation measure and to help
speed time to resolution.
• Perform an in-depth analysis of the filter update to help ensure this class of error is not
repeated.
• Investigate the unusual malformed messages to quickly identify the message pattern and
thoroughly understand any impacts.
• Enable monitoring for notifications of the class of power failure that may affect the database
storage servers.
• Determine whether the database storage servers can be configured to maintain the throughput
level during reduced power situations.
• To improve communications during incidents, we will:
◦ Post timely status updates to the Postini Help forum for better visibility.
◦ Accelerate the work to monitor and communicate the Postini services status on the
Apps Status Dashboard. The dashboard offers a single location for the latest service
status and options for RSS feeds. This will replace the traffic lights system and provide
more accurate and in-depth information.
◦ Update the phone status message more quickly to inform customers
during an incident.
◦ Expand phone support capacity to handle spikes in call volume. This capacity is
expected to be available within the next several weeks.
◦ Update the maintenance pages that are displayed when the
Administration Console is unavailable with current information.
Over the next several weeks, we are committed to implementing these improvements to the Postini
message security service. We understand that system issues are inconvenient and frustrating for
customers. One of Google’s core values is to focus on the user, and we are committed to continually and
quickly improving our technology and operational processes to help prevent and respond to any service
disruptions.
We appreciate your patience and again apologize for the impact to your organization. Thank you for your
business and continued support.
The Postini Services Team