Defenses: One at the Wheel
Physical Security | February 1, 2017 | Mark Tarallo<p>Jeffrey Zients, director of the U.S. National Economic Council, has some advice for workers who are worn out by their daily car commute: simply take your hands off the wheel and turn your attention to something other than driving. “Your commute becomes restful or productive, instead of frustrating and exhausting,” Zients said at a recent press conference.</p><p>Of course, Zients’ vision assumes that the commuter is in a driverless car—or in industry parlance, a highly automated vehicle (HAV). A few months ago, the development of such driverless cars received a jumpstart from U.S. officials, who released new guidelines for operating the vehicles while promoting the government’s position that American highways will be safer when more cars are machine-driven.</p><p>“Too many people die on our roads—35,200 last year alone—with 94 percent of those the result of human error or choice. Automated vehicles have the potential to save tens of thousands of lives each year,” U.S. President Barack Obama wrote in a Pittsburgh Post-Gazette op-ed article about the new guidelines. “And right now, for too many senior citizens and Americans with disabilities, driving isn’t an option. Automated vehicles could change their lives.”</p><p>Global consulting firm McKinsey & Company has predicted that consumers will begin to adopt driverless cars starting in 2020—and that their popularity will overtake conventional cars by 2050.</p><p>But not everyone shares the government’s rosy view about these developments. In some quarters, security and safety concerns about driverless cars abound. Those who are concerned argue that the cars themselves, and the roads they will drive on, will both be too vulnerable once automated vehicles become more common.
</p><p>Despite these concerns, industry is speeding forward, and carmakers are vying to enter the driverless car market first. Tesla has already sold tens of thousands of cars with a self-driving feature known as Autopilot. The company says it aims to be the first to put a fully driverless car on the road, although it hasn’t set a specific date.</p><p>Both the Ford Motor Company and Nissan have said they plan to release driverless car models within the next five years. Driverless taxis may come sooner. General Motors, working with the ride-hailing startup Lyft, said it plans to start testing a fleet of driverless taxis soon.</p><p>Internationally, nuTonomy has said it will provide self-driving taxi services in Singapore by 2018 and expand to 10 cities around the world by 2020. And General Motors’ Cadillac division expects to release a feature called Super Cruise that will allow for hands-free highway driving.</p><p>Given this frenzy of market activity, Zients and U.S. Secretary of Transportation Anthony Foxx released the new U.S. federal guidelines at a press conference last September. The guidelines represent best practice guidance rather than rulemaking, and they outline the government’s expectations in terms of safety and how the new technologies should be regulated.</p><p>The guidelines are broken into four main areas. The first is a 15-point safety standard for the design and development of autonomous vehicles. The second is guidance for states developing their own driverless car policies. The third consists of information on how current regulations can be applied to driverless cars. The fourth is a discussion of specific new regulatory tools and authorities that transportation officials believe might be needed for the proper development of driverless cars.</p><p>The safety standards address questions such as: How will driverless cars react if their technology fails? How will occupants be protected in crashes?
What measures should be put in place to preserve passenger privacy? </p><p>Also included is guidance on how automakers should approach cybersecurity issues in driverless vehicles. U.S. federal officials encourage carmakers “to design their HAV systems following established best practices for cyber physical vehicle systems.” The guidance calls on manufacturers to use best practice principles published by U.S. agencies and organizations, such as the National Institute of Standards and Technology, the Alliance of Automobile Manufacturers, and the Automotive Information Sharing and Analysis Center.</p><p>“The identification, protection, detection, response, and recovery functions should be used to enable risk management decisions, address risks and threats, and enable quick response to and learning from cybersecurity events,” the guidance reads. </p><p>The guidance adds, however, that “this is an evolving area and more research is necessary before proposing a regulatory standard.” And the view that more research is necessary is shared by many, including those who argue that driverless cars have a long way to go before security and safety concerns are satisfied. </p><p>Cybersecurity is the biggest concern for companies now evaluating risk in the developing driverless car industry, according to a recent survey conducted by Munich Re, the German reinsurance and risk management firm. </p><p>In the study, 55 percent of corporate risk managers surveyed named cybersecurity as their top concern regarding driverless cars. In the cybersecurity category, respondents said they believed the greatest threats were auto theft by an unknown individual hacking into and overtaking vehicle data systems (42 percent) and the failure of smart road infrastructure technologies (36 percent).</p><p>Researchers have demonstrated how a hacker can remotely take over the brakes, engine, or other components of a standard car.
The attack surface for a driverless car is even larger, experts say, because it contains extra computers, sensors, and more extensive Internet connectivity.</p><p>There are also security and safety concerns regarding the roads that the driverless cars will travel on, says Howard Jennings, managing director of Mobility Lab, a transportation research firm. </p><p>Early testing shows that driverless cars will be able to drive with less space between them compared with conventional cars. But in areas that are meant to be village-type developments with many pedestrians, and with densely packed cars driving down narrow streets, this feature could create safety hazards. “We could have an unintended consequence here,” Jennings says.</p><p>Related to this issue is what some call the Waze effect, named after the community-based traffic application. When suggesting alternate routes, the Waze app sometimes sends many drivers down the same small street, causing logjams on narrow roads. Driverless cars could wind up doing this as well.</p><p>Jennings also notes that people make transportation choices based partially on the “hassle factor”—they take public transportation downtown because they think parking will be a problem, for example. If driverless cars make car commuting less stressful and take the hassle out of parking, many people may choose them over public transportation. This could put a huge unanticipated strain on road networks, causing infrastructure safety issues due to overuse.</p><p>Finally, some fear that these significant concerns are not being addressed quickly enough, given that driverless cars for consumer purchase may be only a few years down the road.</p><p>“It’s no longer a matter of if, but rather when the time will come for the widescale adoption of automated vehicles,” said Munich Re President and CEO Tony Kuczinski when the survey was issued. “The timeline for adoption may be sooner than many realize.”</p>


Audit Secrets<p>PROPERLY DESIGNED and executed audits of an organization’s security program provide added value to the enterprise by enhancing effectiveness and ensuring compliance with corporate goals and professional responsibilities. Audits are commonly used to find flaws, weaknesses, and areas of concern. However, audits can also be used to optimize improvements and to identify best practices and exceptional performers within the team. This article discusses a Six Sigma approach to auditing a security program and provides guidance on developing an internal audit capacity.</p><p>Six Sigma<br>Developed by Motorola in the 1980s, the Six Sigma business management strategy focuses on finding the cause of errors or defects in a manufacturing process and taking steps to remove those causes through quality control. It requires an organization to spend significant time gathering data and using statistical analysis. While this methodology is great for manufacturing and other areas of business where minutiae can make a difference in the results, it’s not the right fit for security operations. As a result, implementation of a total Six Sigma strategy for a security program is not the best choice. However, using a simplified version of one of Six Sigma’s two project methodologies—to be combined later with its audit tools—can generate outstanding results.</p><p>DMAIC. DMAIC—which stands for define, measure, analyze, improve, and control—is used for improving processes that already exist. The DMAIC process can be adapted to security program auditing through the application of a qualitative, as opposed to a quantitative, approach. Used qualitatively, it can provide a structure for creating effective security program audits.</p><p>The “define” phase can be used to determine what is being audited and can serve as the mission statement of the audit. “Measure” allows a comparison of internal and external requirements to standards of practice.
Disparities can be qualified during the “analyze” phase to determine root causes. Prevention and problem-solving theories are proposed during the “improve” phase. Finally, “control” is used to reassert managerial responsibility for solving problems and optimizing the program.</p><p>To illustrate how this would work, let’s use the example of one large city government in the southeastern United States that used DMAIC to address its local problem with violent crime. After conventional methods such as adding police officers, patrols, and overtime shifts failed, the police department decided to apply DMAIC.</p><p>The mission was defined as assessing the level, type, and causes of crime. Next was the measure phase. Data was gathered on where crime was occurring most frequently, as well as how it was identified and measured. For example, the department looked at how crimes were categorized and defined. Statistical analysis determined that certain crimes had a common nexus. For instance, nonviolent gang activity was closely correlated with shootings. That nexus then became the focus of a future prevention strategy. If gang activities—thefts, loitering, graffiti—were spotted, extra patrols were dispatched in anticipation of possible gunplay.</p><p>By using ongoing analyses of crime reports, citizen complaints, and developing trends, the department kept officers updated about crime hot spots. The city was able to deploy fewer officers by placing them in the right locations. As the program gained success, specialized units were dedicated to fugitive apprehension, violent crime reduction, and felony response. After the program was expanded across the city, crime dropped 28 percent in two years.</p><p>SIPOC. Another Six Sigma tool that can be effective for auditing performance within a security program is SIPOC, which stands for suppliers, inputs, processes, outputs, and customers.
It provides a better fit for auditing service-based programs such as training and uniformed guard services.</p><p>Analysis of suppliers should be based on needs specific to a certain business. For example, a company that plans to contract with a service provider to get armed security officers should first assess exactly what certifications, qualifications, and training methods are best practices for armed security personnel. It can then assess whether the suppliers bidding on the project meet those standards.</p><p>Inputs are the training and policy requirements for employees. Organizational culture, leadership, vision, and management can be examined under “processes.” Outputs are what staff or systems achieve, such as policy compliance and program effectiveness. Outputs, to the extent that they are quantifiable, can be used to calculate a program’s ROI. Finally, evaluating customers provides the opportunity to determine the satisfaction of internal and external stakeholders.</p><p>Let’s look at how SIPOC was applied by a large guard services company with a high-profile government contract. The company had a problem retaining qualified staff despite high regional unemployment and superior pay and benefits. This turnover affected client relationships as well as the bottom line. The company had no problem attracting new qualified applicants, despite mandatory physical fitness and weapons qualification requirements. However, there were significant new-hire costs due to psychological and medical screenings, background investigations, and preemployment training. The constant turnover also eroded client confidence and satisfaction. </p><p>The company wanted to assess the cause of the high turnover and to reduce it. Through the application of SIPOC, the external supply of employees was not difficult to determine. A survey of employees further revealed satisfaction with the pay and benefits.
An examination of inputs—in the form of daily duties—and processes—represented by supervisory expectations—revealed the problem: Officers made a variety of comments regarding mandatory overtime and long hours affecting their job readiness and their ability to meet ongoing mandatory fitness requirements.</p><p>The cost of supporting the overtime reduced company profits. However, the chronic shortfall of personnel forced the company to routinely violate its own policy against excessive overtime.</p><p>The company needed a way to reduce the overtime burden. It reached out to local law enforcement and military personnel and expanded its use of part-time personnel to reduce overtime as well as provide surge capability without requiring additional hours from guards. This one change reduced turnover, reduced job demands, decreased overtime costs, enhanced guard force morale, and increased client satisfaction.</p><p>While the use of DMAIC or SIPOC as a standalone tool may be effective, combining either of these paradigms with a well-planned audit process is the best way to enhance the effectiveness of a security program.</p><p>Auditing<br>Audits take different forms, and all of those forms share certain components.</p><p>Forms. There are three generally accepted forms of auditing: the attribute audit, the performance audit, and the assurance audit.</p><p>Attribute. Attribute audits are the most common audits conducted. They are generally used to test the effectiveness of controls and determine the rate of compliance with established criteria. The results of these audits provide a statistical basis for the auditor to conclude whether controls are functioning as intended, yielding a “yes or no” result for each control.</p><p>Performance. The performance audit evaluates organizational activities such as fire drills or specific requirements such as whether document-control mandates are being followed.
Unlike an attribute audit that typically results in a simple yes-or-no result, a performance audit is used to examine a program, function, or operation to assess whether the entity is achieving economy, efficiency, and effectiveness in the employment of available resources and typically ends with a measured result.</p><p>Assurance. The assurance audit contains elements of the attribute and performance audits, but its primary goal is an independent and objective evaluation of whether a defined standard is being met. While an attribute audit tells us whether a control is functioning, and a performance audit tells us if we are operating efficiently, an assurance audit determines not only whether an organization’s systems are in place and being followed but also whether improvements are needed and whether legal and regulatory requirements are being satisfied. A successful assurance audit concentrates on the present and future needs of the organization.</p><p>Components. Regardless of the type of audit, it must include certain components, such as a charter, an audit team, documents, metrics, testing, and postaudit evaluation.</p><p>Charter. The audit charter is critical for audit success. It defines the key stakeholders, personnel needs, supporting documentation, and other requirements, as well as the scope of the audit and its expected critical outcomes. The audit charter also spells out what performance metrics will be used and what the audit schedule will be. </p><p>Team. Successful audit projects require the assistance of a variety of key organizational stakeholders. While having an accomplished security professional as the head of the audit team is important, an audit sponsor is critical to providing support to the audit team and making critical introductions to cross-functional personnel.</p><p>Depending on the scope of the audit program, the team may need to function under decentralized leadership while still providing a collaborative final product. 
There should be unlimited access to necessary information. Any conflicts should be resolved in this initial phase.</p><p>Documents. The team needs an audit document, which serves as both a checklist and a guide, preventing individual differences among auditors from skewing the data gathered. It must spell out the purpose, procedure, data to be gathered, and conclusions drawn in a consistent manner that is valid and repeatable.</p><p>Managers should consider requiring photographs as supporting documentation to illustrate such issues as whether an area is properly maintained or meets appearance or life-safety standards. Other documentation might include supplementary checklists specific to certain issues or areas, specific interview questions, and copies of performance evaluations.</p><p>Metrics. The development of consistent metrics is crucial for making comparisons across an organization. Using policy as a baseline can allow the development of minimum performance standards. Any number of systems can be used, but an audit must contain objective information. Subjective opinions or impressions may be allowed in the final report, but a properly designed audit will provide consistent and repeatable results.</p><p>Test. The final component of an audit is a test of the audit process. This should be done on a micro scale by each member of the audit team. This evaluation phase will ensure consistency in both the auditors’ conduct and the reported results. Feedback from this process can be used to assess whether the right data are being collected and to refine how the findings are being recorded on the audit documents. Once these results are approved by the audit sponsor, the audit team can commence operations.</p><p>Postaudit. Once the audit has been completed, issues of noncompliance must be analyzed to see whether there are any commonalities.
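This commonality analysis can be sketched in a few lines. A minimal illustration, in which the findings data and attribute names are invented for the example:

```python
from collections import Counter

# Hypothetical noncompliance findings from a completed audit:
# (employee, trainer, policy) tuples -- all names are invented.
findings = [
    ("A. Ruiz",   "J. Smith", "document control"),
    ("B. Chen",   "J. Smith", "document control"),
    ("C. Patel",  "J. Smith", "document control"),
    ("D. Okafor", "M. Lee",   "visitor escort"),
]

def commonalities(findings, key_index):
    """Count how often each value of one attribute (e.g., the trainer
    at index 1) appears among the noncompliance findings."""
    return Counter(f[key_index] for f in findings)

by_trainer = commonalities(findings, key_index=1)
print(by_trainer.most_common(1))  # a shared trainer may explain a cluster
```

A tally heavily skewed toward one trainer or manager points the postaudit review at a systemic root cause rather than at individual employees.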
For example, personnel in different functions and with different managers who are not in compliance with a specific directive may have been trained by the same person. Similarly, a single group’s lax attitude towards a specific policy may be a reflection of weak leadership. In each case, the objective data from the audit, along with suspected causes, should be documented in the final report, including recommendations for corrective action.</p><p>Final steps. Once the audit has been completed and the report written, two final steps should be taken by the security manager. First, all of the deficiencies identified in the audit should be corrected, meaning that areas out of compliance should be brought into compliance and ineffective programs should be revised.</p><p>The second step is to distribute audit results to senior management and the supervisory teams of the departments or groups that were audited. Detailed briefs on findings as well as recommended courses of action for improvement should be offered. Both praise and criticism should be carefully delivered when conducting these briefs. There should also be a plan for enhancing understanding of program policies and goals.</p><p>The audit process should be used as a tool to monitor both progress and change within the overall security program. Once policy has been rewritten, training has been conducted, and operations resumed, the cycle can start over again.</p><p>M. David West, CPP, is a Lean/Six Sigma Black Belt and has developed enterprise-wide security policies for numerous government and commercial organizations. He is a member of the ASIS International Leadership and Management Practices Council. Devin G. Reynolds, CPP, provides training and consulting services to security organizations, law enforcement agencies, and U.S.
government contractors.<br></p>

to Protect PII
Strategic Security<p>If you are an employee, a student, a patient, or a client, your personally identifiable information (PII) is out there—and prime for hacking. In October, the U.S. Government Accountability Office (GAO) added protecting the privacy of PII to its list of high-risk issues affecting organizations across the country. All organizations, from large federal agencies to universities, hospitals, and small businesses, store PII about their employees, clients, members, or contractors. And, as seen in recent large-scale cyberattacks, PII is a hot commodity for malicious attackers.</p><p>According to the U.S. Office of Management and Budget, PII is any information that can be used alone or with other sources to uniquely identify, contact, or locate an individual. However, the definition of PII can depend on the context in which the information is used, according to Angel Hueca, information systems security officer with IT consulting company VariQ. For example, a name by itself is innocuous, but that name combined with a personal e-mail address, a Social Security number, or an online screenname or alias could give bad actors all they need to wreak havoc on a person or company.</p><p>And it appears that no one is immune to the risk of compromised PII. According to research by the GAO, 87 percent of Americans can be uniquely identified using only three common types of information: gender, date of birth, and ZIP code. </p><p>If PII is leaked, the consequences for both affected individuals and organizations can be damaging, says Hueca.
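The GAO statistic reflects how few quasi-identifiers re-identification needs. A toy sketch, with all records invented, that measures what fraction of a dataset is unique on gender, date of birth, and ZIP code:

```python
from collections import Counter

# Toy records: (gender, date_of_birth, zip_code). Each field is harmless
# alone, but the combination is often unique. All data is invented.
people = [
    ("F", "1984-03-12", "20500"),
    ("M", "1984-03-12", "20500"),
    ("F", "1990-07-01", "10001"),
    ("F", "1990-07-01", "10001"),
    ("M", "1975-11-30", "60614"),
]

def unique_fraction(records):
    """Fraction of records whose quasi-identifier combination is unique."""
    counts = Counter(records)
    return sum(1 for r in records if counts[r] == 1) / len(records)

print(f"{unique_fraction(people):.0%} uniquely identifiable")
```

Even in this tiny sample, three of the five records stand alone; real populations behave similarly, which is why such "anonymous" attributes still count as PII in context.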
Companies may face large fines or legal action if the PII they hold is breached, especially if the organization didn’t comply with outlined customer agreements or federal regulations, or if the breach violates the Health Insurance Portability and Accountability Act. A breach can also be reputation-damaging and cost the company employees and clients, Hueca notes. </p><p>Hueca stresses the importance of educating all employees, regardless of whether they have access to the company’s PII, about cybersecurity awareness and online behavior. Even using a personal e-mail at work or posting an image of their workspace on their social media account could lead to the leak of PII—there may be confidential information inadvertently documented in the photo, Hueca points out.</p><p>A more common occurrence is someone with access to an organization’s PII database inadvertently forwarding an e-mail with sensitive information, such as a client’s case number or an employee’s personal contact information. For example, in 2014, a Goldman Sachs contractor accidentally sent an e-mail with confidential brokerage account information to a Google e-mail address instead of to the contractor’s personal e-mail. Goldman Sachs went to the New York State Supreme Court to ask Google to block the recipient from accessing the e-mail to prevent a “needless and massive” data breach. The court didn’t rule on the case, because Google voluntarily blocked access to the e-mail.</p><p>Hueca says that segregating duties and tightly controlling who has access to certain information can help with this issue. Often, HR or administrative employees may need access to some PII, but not all of it—isolating potentially sensitive information can prevent harmful leaks. </p><p>How an organization’s network is set up can help prevent the accidental or malicious transfer of PII. 
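The segregation-of-duties idea, giving each role only the PII fields it needs, can be sketched as a simple field filter. The roles, field names, and sample record here are invented for illustration:

```python
# Each role sees only the PII fields it needs; everything else is withheld.
# Role names and field sets are illustrative assumptions.
ROLE_FIELDS = {
    "hr":       {"name", "email", "phone"},
    "payroll":  {"name", "ssn", "bank_account"},
    "helpdesk": {"name", "email"},
}

def visible_record(record: dict, role: str) -> dict:
    """Return only the fields the given role is permitted to see."""
    allowed = ROLE_FIELDS.get(role, set())  # unknown roles see nothing
    return {k: v for k, v in record.items() if k in allowed}

employee = {
    "name": "J. Doe", "email": "jdoe@example.com",
    "phone": "555-0100", "ssn": "123-45-6789", "bank_account": "0001",
}
print(visible_record(employee, "helpdesk"))  # name and e-mail only
```

An accidental forward by a helpdesk employee then cannot expose a Social Security number the role never had access to in the first place.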
Hueca suggests keeping sensitive information segregated from the rest of the network environment—if there is a breach, hackers will have to break through a second firewall to access the information. Organizations should also take advantage of standard content tracking software to spot suspicious activity.</p><p>“Fortunately, many organizations have something called content filtering, which are tools that are able to filter information as it comes in and out of the organization,” Hueca explains. “If there’s something that looks like a Social Security number, with nine digits, being sent out, the tool will alert an administrator that this activity is happening, which could be accidental or malicious.” </p><p>The U.S. Department of Homeland Security’s (DHS) handbook for safeguarding PII says only secure, organization-issued electronic devices should be used to view sensitive information. If an employee must access PII on a portable device, such as a laptop, USB, or external hard drive, the data should be encrypted. And if PII must be e-mailed within the office, DHS strongly recommends password-protecting the data within the e-mail. </p><p>Lastly, Hueca recommends that all companies have an incident response plan in place specifically for the malicious theft of PII. </p><p>“This is something that most organizations don’t think about, having an incident response plan specifically for a PII breach,” Hueca says. “What happens if you do get breached? What are the steps? Talk about what-ifs. Once you have a notification in place, you get alerted, what do you do? Try to segregate it from other sensitive data and figure out what happened.” </p>

Utility of Securing the Electric Supply
Cybersecurity<p>OUR SOCIETY IS BUILT in part on a foundation of reliable electrical energy.
Utilities work to ensure the uninterrupted supply of electricity in the face of multiple threats, including copper thieves, marijuana growers, computer hackers, and potential terrorists. The experiences of three utilities, plus a look at how the industry as a whole is trying to improve information sharing, serve to illustrate the challenges this sector faces and the varied solutions that are helping to minimize the risks.</p><p>Copper Theft<br>EPCOR Utilities Inc., a power and water provider owned by the City of Edmonton, Alberta, Canada, owns or operates 50 facilities in Canada and the United States. One major security problem the company faced in recent years was the theft of copper.</p><p>Global economic growth over the past decade, especially in China and India, had created a high demand for industrial metals like copper, boosting their values to unprecedented levels. Understandably, this market has made copper, which is prevalent across the energy sector, a prime target for thieves.</p><p>According to a survey published in January by the Electrical Safety Foundation International (ESFI), electrical utilities sustained an estimated 50,193 thefts of copper during 2008. ESFI estimates that copper thieves hit 95 percent of electrical utilities in the United States. The copper stolen was valued at just over $20 million.</p><p>The full impact of copper theft, however, dwarfs the cost of the metal alone. Thefts in 2008 caused more than 317 days of power outages, ESFI found. Utilities also have to spend money on repairs, and when custom-ordered materials are stolen from construction projects, further activity is delayed while replacement equipment is ordered and manufactured. Thus, the total impact of that year’s thefts is estimated at more than $60 million, while utilities spent another $27 million trying to prevent future copper theft incidents.</p><p>Beyond these costs are the dangers that such thefts pose to thieves and utility workers alike. 
Most people think of electricity in the context of home wall outlets.</p><p>That amount of energy is relatively small, and safety is carefully engineered into delivery from the substation to the homes and appliances that use it. That same level of safety does not exist at the utility generating plant or substation. ESFI found that 52 people were injured while stealing copper last year, and 32 died. Thieves have died in substations wearing running shoes, using rubber-handled cutting tools, mistakenly thinking that they were protected, only to have massive arcs of electricity travel through the air and their bodies en route to the ground. Similarly, if a thief successfully steals a copper grounding cable, the next utility worker to service that equipment could get a fatal shock.</p><p>Thieves find copper in the form of wire in construction projects, derelict housing, distribution lines, telephone boxes, and electrical substations, among many other sources. Those committing the crimes run the gamut from desperate drug addicts to members of organized crime syndicates.</p><p>The common thread is opportunism. If would-be thieves don’t see copper or don’t think they can access it easily, they won’t even try. Thus, experience has shown that the best way of reducing the theft of copper is to reduce ease of access to it. </p><p>Realizing that a company’s technical and construction personnel are best positioned to limit exposure and given the clear nexus with worker safety, EPCOR Utilities addressed the problem by educating staff through its existing program of Safe Work Practices. It turned out that many workers were unaware of both the risks posed by copper thefts and how easily they could help to mitigate them. </p><p>Construction crews now clean up all scrap copper at the end of the day, and unused copper wire and grounding equipment must be either returned to service centers at night or securely locked away. 
</p><p>Other solutions have been improvised by workers in the field. When, for example, they are burying copper cable, crews make sure that they finish a given segment before heading home for the day; they don’t start segments they can’t finish that day so that equipment will not have to be left out overnight, which would be an invitation to thieves. </p><p>Another simple method of thwarting copper thieves is wire tagging, which essentially entails “branding” copper with a sign of ownership. It works on three fronts: it’s a deterrent to thieves, it can help authorities track down perpetrators, and it can alert legitimate scrap vendors to stolen materials. </p><p>Utilities typically set up scrap disposal contracts with an approved recycler; that company should be the only vendor handling that utility’s copper scrap. If a legitimate recycler spots a company’s tags on scrap offered by a third party, the recycler won’t buy it, discouraging future theft. </p><p>EPCOR Utilities uses two products: DataDot, which is an adhesive material containing sand-sized particles bearing a registered company PIN, and DataTraceDNA, also developed by DataDot Technology Ltd. along with Australia’s state-run Commonwealth Scientific and Industrial Research Organization. DataTraceDNA is battleship-gray paint containing a signature ceramic taggant identifying the owner.</p><p>Stamping copper components with the name of the electrical utility that purchases them is another excellent method of marking copper. Grounding stakes, copper fittings, and wire can all be stamped. Another tactic is use of alternative conductors such as Copper Clad Steel, produced by Copperweld. The cable’s conductive copper cladding constitutes only 3 percent of its diameter, leaving a thief with minimal resale value.</p><p>These measures, part of a broader, companywide security program, helped reduce overall shrinkage at EPCOR by two-thirds from 2007 to 2008.
Copper thefts—one of which cost the company $20,000 in metal alone—were all but eliminated in 2008, with only four minor thefts reported.</p><p>Copper’s market price peaked at $4 per pound in 2006, but it fell to $1.50 per pound in early 2009, and the rate of theft has fallen somewhat with it. This is not the end of the problem, though. Utilities know that when the global economy improves, copper theft will increase again.</p><p>Electricity Theft<br>BC Hydro and Power Authority is a provincially owned utility in Canada; it produces power for domestic use and export and manages small water supply operations in remote communities within British Columbia. For BC Hydro, which serves 94 percent of British Columbia’s population, the problem is electricity theft and the associated damage, which together are estimated to cost the company $30 million annually. That figure is expected to rise to $60 million within a decade if left unchecked.</p><p>In British Columbia, 99 percent of energy theft is linked to illegal indoor marijuana cultivation operations, which require powerful lamplight 24 hours a day. Criminals tap into distribution circuits in various ways to bypass the electric meter. Some of their methods are quite sophisticated, and all are extremely dangerous. Beyond the obvious risk of electrocution to both perpetrators and utility workers, diversions can result in unstable circuits that can lead to house fires, explosions, and power surges affecting all homes on the circuit.</p><p>Aside from obvious physical meter tampering, which a company technician would readily spot, the most telling indicators of diversion are a sudden drop in metered consumption and a sudden increase in actual power draw. 
To uncover these indicators, BC Hydro special investigation teams search for anomalies in the electric consumption records of customer premises and conduct field tests on distribution circuits and feeds and at electrical meters.</p><p>Any diversion confirmed by BC Hydro is reported to law enforcement. While energy diversion can statistically establish suspicion of marijuana cultivation, the decision of whether to investigate or pursue narcotics charges falls solely to police. And in Canada, recovering the utility’s lost revenue is solely a civil matter, except where the courts order restitution as part of a successful theft conviction. It falls to each utility to collect from the energy thief, and the matter is often settled before a civil court judge.</p><p>In the United States, the process is only slightly different, according to Scott Burns, a former criminal prosecutor and now executive director of the National District Attorneys Association. Nearly all U.S. states have criminal theft-of-service statutes, with penalties mirroring those for physical theft. The utilities are expected to report energy diversions to police. Then, as in Canada, it falls to police to decide whether to simply pursue theft charges or investigate possible drug cultivation.</p><p>Not all pot growers steal power. But most of them place an exceptionally high draw on the grid, which presents critical safety concerns within a building. Thus, an amendment to British Columbia’s Safety Standards Act allows municipalities to request information on high-consumption users without violating privacy.</p><p>High consumption is specifically defined in the law as consumption over 93 kilowatt-hours per day, compared to about 30 kilowatt-hours daily for a normal household. 
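</p><p>As a rough illustration only, and not BC Hydro’s actual methodology, the two red flags just described can be expressed as a simple screen over an account’s daily meter readings. The 93 kilowatt-hour threshold comes from the statute; the 30-day window and the 50 percent drop ratio are assumptions chosen for this sketch:</p>

```python
# Illustrative screen for two diversion indicators: sustained draw above
# the statutory 93 kWh/day threshold, and a sudden drop in metered use.
HIGH_CONSUMPTION_KWH_PER_DAY = 93  # defined in B.C.'s Safety Standards Act
DROP_RATIO = 0.5                   # assumed: flag use below half of baseline
RECENT_DAYS = 30                   # assumed comparison window

def screen_account(daily_kwh):
    """Return indicator flags for one account's daily kWh readings."""
    if len(daily_kwh) <= RECENT_DAYS:
        return {"high_consumption": False, "sudden_drop": False}
    baseline = daily_kwh[:-RECENT_DAYS]
    recent = daily_kwh[-RECENT_DAYS:]
    baseline_avg = sum(baseline) / len(baseline)
    recent_avg = sum(recent) / len(recent)
    return {
        "high_consumption": recent_avg > HIGH_CONSUMPTION_KWH_PER_DAY,
        "sudden_drop": recent_avg < DROP_RATIO * baseline_avg,
    }

# A normal household averages about 30 kWh/day; a grow operation can draw
# far more, and a new diversion shows up as a sharp drop in metered use.
normal = screen_account([30.0] * 120)
grow_op = screen_account([120.0] * 120)
diversion = screen_account([35.0] * 90 + [8.0] * 30)
```

<p>A screen like this would only prioritize premises for the field tests described above; it confirms nothing on its own.</p><p>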
Records are provided to municipalities on written request from a designated public safety official, such as a fire marshal, to ensure that high consumption does not present a life-safety danger.</p><p>Cybersecurity<br>Manitoba Hydro, also a provincially owned power utility in Canada, generates and transmits power to Manitoba and the United States. Like other utilities, it was concerned about the cybersecurity of industrial control systems (ICS), including the supervisory control and data acquisition (SCADA) software used by utilities.</p><p>The vulnerability of these systems has gained attention in recent years as media reports have highlighted the potential threat posed by hackers breaking into these systems and remotely controlling or sabotaging the electric grid. An anonymously sourced article earlier this year in the Wall Street Journal, for example, reported that Chinese and Russian spies had both penetrated the North American electric grid and left behind bot-like programs that could possibly be activated at a later date to cripple the North American electricity infrastructure.</p><p>The report elicited the widest possible range of responses from network security experts. Some cast the report as an accurate and overdue public wake-up call for the utility sector. Others brushed off the report as a cynical bid from within the U.S. government to advance a policy agenda.</p><p>Utility security professionals who are disciplined about risk know that the greatest threat of cyberattack comes not from overseas or from a radicalized hacker but from within. Consider, for example, that in 2000, an Australian engineer quit his job with a contractor hired to install a SCADA system in a sewage treatment plant. When the utility did not hire him as an independent contractor, he accessed the SCADA system himself and dumped more than 200,000 gallons of raw sewage into area rivers, parks, and onto the grounds of a local hotel.</p><p>More recently, the U.S. 
government charged that a former IT contractor for California-based Pacific Energy Resources, Ltd., remotely disabled network systems the company used to alert it to leaks at offshore oil rigs.</p><p>Addressing the threat, inside and out, requires a comprehensive, converged enterprise security plan with sound fundamentals, including strong procedures for ensuring personnel security and multiple factors of network access control, with credentials changed regularly to prevent access by former employees or vendors.</p><p>Manitoba Hydro handles personnel risk assessment using a methodology established by the HR Policy Association that considers the nature of a worker’s position, the gravity of prior offenses, and the length of time since they occurred. While the company is already using this approach to assess new hires, assessments of longstanding employees are the subject of negotiations with unions.</p><p>With regard to network access control, the company recognizes that solid IT security requires regular training and awareness programs, along with the use of passwords, tokens, and remote-access authentication and encryption.</p><p>The electric utility sector as a whole is taking a major step toward bolstering both general security and cybersecurity with a suite of nine critical infrastructure protection standards. 
Issued in 2005 by the sector’s self-regulatory entity, the North American Electric Reliability Corporation (NERC), the standards address real or suspected sabotage, critical cyber-asset identification, security management controls, personnel and training, electronic security perimeters, physical security of critical cyber assets, systems security management, incident reporting and response planning, and recovery plans.</p><p>Implementation of the first standard applying to cybersecurity—critical cyber-asset identification—generated an April memo from NERC Chief Security Officer Michael Assante, who indicated that utilities might require a more robust consideration of which assets are critical by first assuming that all assets are. NERC asked that member utilities “take a fresh, comprehensive look at their risk-based methodology and their resulting list of [assets] with a broader perspective on the potential consequences to the entire interconnected system of not only the loss of assets that they own or control but also the potential misuse of those assets by intelligent threat actors.”</p><p>Assante’s letter implied that in initial assessments, the utility sector designated far fewer assets “critical” than NERC thought it should have. Testifying recently before Congress with Assante, Stephen T. Naumann of energy company Exelon Corp. assured lawmakers that “as owners, operators, and users of the bulk power system, electric utilities take cybersecurity very seriously.”</p><p>The first NERC standards were scheduled to become enforceable in July, with fines for noncompliance of up to $1 million a day, but the Federal Energy Regulatory Commission, which formally regulates the power sector, has urged industry compliance by the end of 2010, after which time it would take enforcement action.</p><p>Networking<br>A comprehensive regimen of information sharing between utilities and government agencies is a critical component of security. 
While communications occur today on an unprecedented scale, they are still not completely open and collaborative.</p><p>Countries like the United States have created regulatory agencies that seek to ensure the reliability of the bulk electric system and, as a prerequisite, the security of that system. The Department of Energy sets energy policy, the Federal Energy Regulatory Commission regulates U.S. utilities, and the Department of Homeland Security (DHS) steers security policy; these efforts are coordinated in part through NERC, which serves as the sector’s official information-sharing and analysis center.</p><p>Canada, by comparison, lacks a central regulatory agency for its electricity sector. Natural Resources Canada regulates environmental impacts, while provincial utility commissions represent consumers. Public Safety Canada administers national security and federal emergency management programs. But none of these agencies has jurisdiction over the publicly and privately owned electrical utilities across Canada. Like their American counterparts, major Canadian utilities—but not all of them—are affiliated with NERC through regional reliability coordinating councils. The 32-member Canadian Electrical Association (CEA), the country’s private industry organization, has become the de facto voice for sector information sharing. Utility security is addressed specifically by the CEA Security and Infrastructure Protection Committee (SIPC).</p><p>SIPC meets three times a year, and meetings feature closed-door “pens-down,” or off-the-record, sessions in which relevant experiences and concerns related to critical infrastructure protection can be discussed without fear of public disclosure. Several years ago, the committee agreed to include representatives from the Royal Canadian Mounted Police (RCMP) in a meeting. That first meeting demonstrated a need and desire for information and intelligence sharing and spawned a new level of participation and cooperation. 
Today, several federal government agencies join in these meetings to facilitate public-private information-sharing efforts and to provide classified briefings.</p><p>The RCMP, Public Safety Canada, and the Canadian Cyber Incident Response Centre were invited to subsequent SIPC meetings, and their participation continues. Reciprocally, the RCMP has provided security clearances to CEA members, who now participate in twice-yearly classified energy sector briefings.</p><p>The three government agencies are all partners in the national Integrated Threat Assessment Centre (ITAC). Sector members with secret-level clearance receive ITAC’s relevant intelligence products and participate in secret briefings in Ottawa. Most important, new trusted relationships between government and utility personnel have resulted in ongoing communication about threats and vulnerabilities. </p><p>Before these types of exchanges, sector-specific concerns like copper theft were relatively unknown to national officials from the RCMP. Similarly, many utility-sector representatives were unfamiliar with the threat posed by extremist environmental groups like the Earth Liberation Front. Collaboration has brought a new sense of understanding and cooperation to public and private participants.</p><p>CEA representatives recently attended a NERC cybersecurity meeting in Phoenix, Arizona, during which American counterparts shared their desire for more trusted person-to-person relationships with federal agencies such as the FBI and DHS. Canada’s effort has benefitted in part from its scale, with utilities and government serving a population roughly one-tenth that of the United States.</p><p>Canada’s information-sharing effort is not perfect. It is difficult to reach all critical infrastructure owner-operators when they are not compelled to participate in information sharing. 
But the CEA’s SIPC model is providing an excellent conduit for information sharing, one that is gaining momentum and trust.</p><p>A more formal information-sharing environment, such as the one the CEA established within Canada, could serve as a model for any country’s critical infrastructure sector. The end result would be better preparedness and better response capabilities, to the mutual benefit of all parties. </p><p>Ross Johnson, CPP, BMASc (Bachelor of Military Arts and Science), is senior manager, security and contingency planning for Capital Power Corporation in Edmonton, Alberta, Canada, and is a member of the ASIS Oil, Gas, and Chemical Industry Security Council. </p><p>Chris McColm, CPP, CFI (Certified Forensic Investigator), is corporate security manager for Manitoba Hydro and Gas in Winnipeg, Manitoba, Canada, and a member of the ASIS Utility Security Council. </p><p>Doug Powell, CPP, PSP, is manager, corporate security for BC Hydro and Power Authority, headquartered in Vancouver, British Columbia, Canada.<br></p>