Strategic Security Play: Resilience & Infrastructure

2017-06-01, Megan Gates

<p>Standardization is often seen as a positive in modern society, but there are risks in creating a monoculture—a homogenous culture lacking diversity—especially in cyberspace.</p><p>In a paper published in 2003 by the Computer & Communications Industry Association, a team of researchers outlined the risks a Microsoft monopoly posed to global cybersecurity. A majority of the world’s computers at the time were running Microsoft’s operating system, so they were vulnerable to the same kinds of viruses and worms.</p><p>“Because Microsoft’s near-monopoly status itself magnifies security risk, it is essential that society become less dependent on a single operating system from a single vendor if our critical infrastructure is not to be disrupted in a single blow,” the authors of CyberInsecurity: The Cost of Monopoly said. “The goal must be to break the monoculture.”</p><p>The authors suggested that governments create policies and regulations requiring critical infrastructure operators to diversify the operating systems they use, thereby preventing a single virus from wreaking global havoc.</p><p>In 2010, two of the authors wrote an essay explaining that their views on the monoculture threat had changed. One reason for their change in perspective was that, in 2003, they had assumed that the IT monoculture was relatively simple, but it’s not. 
</p><p>“Two computers might be running the same [operating system] or applications software, but they’ll be inside different networks with different firewalls and [intrusion detection systems] and router policies; they’ll have different antivirus programs and different patch levels and different configurations, and they’ll be in different parts of the Internet connected to different servers running different services,” wrote Bruce Schneier, now chief technology officer at IBM Resilient, for Information Security magazine. “That’s one of the reasons large-scale Internet worms don’t infect everyone—as well as the network’s ability to quickly develop and deploy patches, new antivirus signatures, new IPS signatures, and so on.” </p><p>The risks of a monoculture on critical infrastructure were brought to light outside of cyberspace in December 2015 when Ukraine’s electric grid was hit by a cyberattack, leaving approximately 225,000 people without power. Ukraine recovered, but was hit by another cyberattack in the fall of 2016, which again cut the power.</p><p>The electric grid in Ukraine, as in most of eastern Europe, was created when it was part of the Soviet Union. Ukraine’s system was standardized and designed to operate exactly the same way, across the board. Since Ukraine became an independent nation in 1991, it has built some diversification into its electric grid. </p><p>“But the culture, the thinking, the older system are all fairly standard across the country and look just like Russia—its adversary to the east—because it was all built on the old Soviet model,” says Marcus Sachs, CSO of the North American Electric Reliability Corporation (NERC). 
“That becomes a weakness when you repeat things and you don’t have diversity in thinking, and diversity in the way you run stuff.”</p><h4>The Ukraine Attack</h4><p>On December 23, 2015, three Ukrainian regional electrical distribution centers—called oblenergos—went down within 30 minutes of each other, cutting power to approximately 225,000 people. The cause of the outage: a coordinated cyberattack, the first publicly acknowledged cyberattack to result in a power outage.</p><p>The oblenergos were forced to use manual operations to restore power to the electric grid and restored power quickly after an initial outage of several hours. However, the impacted oblenergos continued to run their distribution systems in an “operationally constrained mode,” according to Analysis of the Cyber Attack on the Ukrainian Power Grid, issued by SANS Industrial Control Systems and the Electricity Information Sharing and Analysis Center (E-ISAC).</p><p>After restoring power, Ukraine worked with security vendors and government partners—including the U.S. Department of Homeland Security (DHS) and NERC—to investigate how the cyberattack was carried out.</p><p>They discovered that the attackers used spear phishing emails sent to administrative or IT network operators to gain access to the oblenergos’ business networks. The emails included an attachment—an Excel spreadsheet—embedded with BlackEnergy malware that, once opened, installed Secure Shell (SSH) backdoors on the oblenergos’ networks.</p><p>These backdoors allowed the attackers to gather information on the environment and gain access to other areas of the network more than six months before the December 23 attack. </p><p>“One of their first actions happened when the network was used to harvest credentials, escalate privileges, and move laterally through the environment,” the analysis says. 
“At this point, the adversary completed all actions to establish persistent access to the targets.”</p><p>The attackers used these stolen credentials to pivot into the network segments where supervisory control and data acquisition (SCADA) dispatch workstations were located. Using these connections, the attackers learned how to interact with the oblenergos’ distribution management systems (DMSs) and developed malicious firmware to use later.</p><p>They gained access to the oblenergos’ industrial control systems (ICS) components, and installed malicious software—called KillDisk—across the environment. The attackers then combined their work to execute the attack, opening the oblenergos’ breakers and taking at least 27 substations offline. They also uploaded the malicious firmware they had created to prevent operators from using remote commands to bring the substations back online.</p><p>“During the same period, the attackers also leveraged a remote telephonic denial-of-service attack on the energy company’s call center with thousands of calls to ensure that impacted customers could not report outages,” the analysis says. “Initially, it seemed that this attack was to keep customers from informing the operators of how extensive the outages were; however, in review of the entirety of the evidence, it is more likely that the denial of service was executed to frustrate the customers since they could not contact customer support or gain clarity regarding the outage.”</p><p>The analysis authors also note that the power outage was not caused by BlackEnergy, the backdoors, KillDisk, or the malicious firmware. 
Instead, these components of the attack were used to access the oblenergos’ systems and then delay the restoration of power.</p><p>“However, the strongest capability of the attackers was not in their choice of tools or in their expertise, but in their capability to perform long-term reconnaissance operations required to learn the environment and execute a highly synchronized, multistage, multisite attack,” according to the analysis.</p><h4>Why Ukraine? </h4><p>No one has claimed responsibility for the attack on Ukraine’s electric grid. Ukraine’s Security Service has pointed a finger at Russia, but has not offered publicly available evidence to corroborate that claim.</p><p>However, there are many reasons that an attacker would see Ukraine as an attractive target for this kind of cyberattack, says Ernie Dennis, a cyber intelligence analyst at the Retail Cyber Information Sharing Center who was formerly with Arbor Networks.</p><p>Russia annexed part of Ukraine—Crimea—in 2014 and has stationed military troops along the border of eastern Ukraine since then. After the annexation occurred, there was not a great deal of pushback from the European Union or the United States, except in the form of sanctions. </p><p>If Russia had been developing the ability to conduct a cyberattack on an electric grid, and wanted to test the method and face few consequences for doing so, targeting Ukraine might be a good idea, Dennis says.</p><p>“Ukraine makes a great playground to test your neighbor’s resiliency to push more boundaries,” he explains. 
“If [the attackers] were to have done this in a legitimate European Union nation or a NATO ally, there’s a whole lot of other concerns that they have to worry about.”</p><p>Those concerns include being able to stay on the distributor’s network, facing a more robust defensive posture, and retaliation.</p><p>“But if you muck around in a country you’re already playing around in, and you haven’t had any issues, why not push it a little bit further and see what else you can get away with?” Dennis adds.</p><p>His thinking is in line with findings from Booz Allen Hamilton, which released the report When the Lights Went Out: A Comprehensive Review of the 2015 Attacks on Ukrainian Critical Infrastructure. The report says the December 2015 cyberattack was just the latest in a series of attacks.</p><p>“This long-running campaign likely reflects a significant, concerted effort by a single threat actor with a well-organized capability and interest in using cyberattacks to undermine Ukraine’s socio-political fabric,” the report says. </p><p>For instance, other cyberattacks were carried out against Ukraine’s electric sector, railway sector, television sector, mining sector, and regional government and public archives beginning in 2014. BlackEnergy—the malware used in the December 2015 cyberattack—was used in some of these previous attacks.</p><p>These attacks could have been undertaken to send a message because they were not designed to provide the attackers with a financial return, says the report.</p><p>“While politically motivated cyberattacks are not a novel foreign policy tool, the industries and organizations that serve as potential targets are expanding,” the report says. “Cyberattacks present a powerful political tool, particularly those against critical infrastructure providers. 
Industrial control systems operators are not above the fray in geopolitical rows, and may in fact be the new primary target.”</p><h4>What the Hack Means for Defenders</h4><p>While it is not certain who was behind the December 2015 cyberattack, the culprit was well-resourced, well-organized, and able to identify the biggest point of failure in Ukraine’s electric grid system: the operator’s security posture, which allowed remote access to the control environment without two-factor authentication.</p><p>The attack also marked an escalation from previous destructive attacks that targeted computers and servers—like the Saudi Aramco hack in 2012 and the Sony Pictures attack in 2014.</p><p>“Several lines were crossed in the conduct of these attacks, as the targets could be described as solely civilian infrastructure,” the SANS report found. “Historic attacks, such as Stuxnet [attack on Iran’s nuclear program]…could be argued as being surgically targeted against a military target.”</p><p>Some areas of the world also might be at greater risk of a similar type of cyberattack, Dennis says.</p><p>“If someone really wanted to affect Africa and take out the power, I believe that they would have similar success to what they did in Ukraine,” he explains. “The reason why the United States and the European Union are so headstrong about their power infrastructure is because they know for a fact that they’ve taken the time, money, and effort to make it robust and secure, in light of ongoing thoughts of doom and gloom that it could happen any day.”</p><p>A destructive cyberattack has not hit U.S. critical infrastructure, but in fiscal year 2015, members of the U.S. 
energy sector reported 46 cybersecurity incidents to the Industrial Control Systems Cyber Emergency Response Team (ICS-CERT), according to the Booz Allen report.</p><p>“ICS-CERT does not publish a breakdown of the types of incidents by sector, but it revealed that 31 percent of total incidents reported across all sectors involved successful intrusion into operators’ assets, a third of which included accessing control systems,” the report says. </p><p>One of the few disclosed incidents was a BlackEnergy campaign that the U.S. government suspected was sponsored by the Russian government. However, the campaign did not attempt to “damage, modify, or otherwise disrupt” the electric grid.</p><p>This type of campaign is in line with the findings from a DHS Office of Intelligence and Analysis intelligence assessment that found that the “threat of a damaging or disrupting cyberattack against the U.S. energy sector is low.”</p><p>Nation-state cyber actors are targeting the U.S. energy sector enterprise networks, the report found, but mainly to conduct cyber espionage.</p><p>“The APT activity directed against sector industrial control system networks probably is focused on acquiring and maintaining persistent access to facilitate the introduction of malware, and likely is part of nation-state contingency planning that would only be implemented to conduct a damaging or disruptive attack in the event of hostilities with the United States,” the assessment says. </p><p>The DHS analysis was released in the spring of 2016, and DHS did not respond to requests for an updated threat analysis for this article. </p><p>However, other experts doubt that an attack—like the one against Ukraine—would be effective against the U.S. 
or Canadian electric grids because regulators have taken steps to address cyber risks to the grid.</p><p>In 2006, NERC started the effort to create reliability standards for cybersecurity for the North American bulk power system, which is a major target with more than 450,000 miles of high voltage transmission lines and more than 55,000 transmission substations, says Brian Harrell, CPP, director of security and risk management for Navigant Consulting and former director of critical infrastructure protection programs at NERC.</p><p>“NERC and the industry have gone through multiple iterations of mandatory Critical Infrastructure Protection Standards (CIPS) that focus on security protections,” Harrell says. Not complying with these standards can result in fines of up to $1 million per day, per violation. </p><p>And, Harrell adds, “it’s important to remember that these are minimum standards, and should be looked at as a baseline from which to improve. Utilities should constantly be assessing their systems, patching their software, and testing their recovery procedures.”</p><p>Also aiding the United States in preventing a similar attack from being effective is a robust information sharing system between NERC, the E-ISAC, the federal government, and the private sector. </p><p>“Over the past few years, DHS, the FBI, and the U.S. Department of Energy have made considerable strides in improving information sharing and giving classified access to intelligence products, such as bulletins, alerts, and secret-level briefings,” Harrell says. “These data points have been used to mitigate threats, reduce risk, and update internal security policies.”</p><p>This system exists in the United States, and NERC is working with the Canadian government and Canadian power companies to create a similar information sharing network, Sachs says. 
</p><p>However, Sachs says it’s important that these information sharing centers remain a voluntary practice for private companies to participate in.</p><p>“There’s very little critical infrastructure that’s government owned, and that’s frustrating because you can’t really demand the private sector share with the government, because if you do that, they will only share the bare minimum required to meet the law,” Sachs explains. “You want to encourage voluntary sharing, that way they’ll share more.”</p><p>To help bolster the electric grid in the United States and Canada, NERC has sponsored four biennial exercises, called GridEx, to provide utility operators with the opportunity to demonstrate how they would respond to and recover from a simulated coordinated cyber and physical security threat. </p><p>The first exercise took place in November 2011, and NERC will hold its next exercise—GridEx IV—in November 2017. NERC will provide participants with a detailed scenario that grid operators can then adapt to their own training needs, Sachs says.</p><p>“We try to build an exercise that stresses the operator community, makes them think about how they would respond and not so much looking into how the electricity is turned off,” Sachs says. “This helps eliminate people reading into a scenario and saying, ‘Well, that physically can’t happen.’”</p><p>But the final factor that bolsters North America’s electric grid security is the fact that it is a mostly privately owned and operated system that is diverse, despite its regulatory framework.</p><p>“Even though we may agree on what the outcome needs to look like, we will allow an asset owner to have maximum flexibility in designing a system that can achieve that outcome,” Sachs says. 
“So then you have all these different approaches, and a bad actor who is trying to get in, if he finds success somewhere, that success isn’t necessarily going to work elsewhere because the approaches were different.”</p><p>The North American system wasn’t initially designed to be diverse, Sachs says, but was instead designed to be resilient and adapt to problems.</p><p>“What tends to work here is you adapt the design of the grid to the local conditions, and working on our behalf in North America is the culture in the U.S. and Canada of diversity—a culture that says, ‘It’s okay to do things differently. We don’t have to be uniform, by the book, precise,’” Sachs says. </p><p>And this diversity in the design and implementation of security makes the North American grid more secure, Sachs says, because an attacker couldn’t use the exact same approach to take down multiple aspects of the grid.</p><p>“But that also doesn’t mean we turn off our vigilance,” Sachs adds. “When we’re up against a thinking enemy—a human mind—the defenders have to be on the lookout for new methods on the attacker side and never let their guard down. They have to use the strengths they have, and diversity is one of those big strengths.” </p>
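The monoculture argument that frames this article can be illustrated with a toy calculation (a hypothetical sketch, not from the CyberInsecurity paper): a single exploit compromises every host running the vulnerable platform, so the worst-case reach of one vulnerability equals the largest platform's share of the install base.

```python
# Toy illustration of the monoculture argument. The platform names and
# market-share numbers below are invented for illustration only.

def worst_case_compromise(market_shares):
    """Fraction of hosts a single-platform exploit can reach."""
    assert abs(sum(market_shares.values()) - 1.0) < 1e-9
    return max(market_shares.values())

monoculture = {"os_a": 0.95, "os_b": 0.05}
diversified = {"os_a": 0.40, "os_b": 0.35, "os_c": 0.25}

print(worst_case_compromise(monoculture))   # 0.95
print(worst_case_compromise(diversified))   # 0.4
```

As Schneier's caveat above suggests, this overstates real-world risk: hosts sharing an operating system still differ in firewalls, patch levels, and configurations, so effective diversity is higher than OS share alone implies.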

Trends

<p>In recent years, security professionals have been bombarded with rules and regulations on corruption as well as court rulings on discrimination and harassment. The upcoming compliance trend centers on safety and health. A new rule on reporting workplace fatalities, injuries, and illnesses will bring workplace safety practices under scrutiny. Almost 5,000 U.S. employees were killed at work in 2014, a 5 percent increase from the number of reported fatal work injuries in 2013. And nearly 3 million people experienced a workplace injury or illness in 2014, according to the U.S. Department of Labor’s (DOL) Bureau of Labor Statistics (BLS).</p><p>To make data about these incidents more accessible to the public, the DOL’s Occupational Safety and Health Administration (OSHA) issued a final rule, Improve Tracking of Workplace Injuries and Illnesses, in May 2016, that requires many employers to electronically submit information about workplace injuries and illnesses to the government. The government, in turn, will then make this information available online in a public database.</p><p>“Since high injury rates are a sign of poor management, no employer wants to be seen publicly as operating a dangerous workplace,” Assistant Secretary of Labor for Occupational Safety and Health Dr. David Michaels said in a statement. 
“Our new reporting requirements will ‘nudge’ employers to prevent worker injuries and illnesses to demonstrate to investors, job seekers, customers, and the public that they operate safe and well-managed facilities.”</p><p>Additionally, Michaels said that greater access to injury data will also help OSHA better target compliance assistance and enforcement resources to “establishments where workers are at greatest risk, and enable ‘big data’ researchers to apply their skills to making workplaces safer.”​</p><h4>What’s in the new rule?</h4><p>Under the Occupational Safety and Health Act of 1970, employers are responsible for providing a safe workplace for employees. As part of this act, OSHA already required many employers to keep a record of injuries and illnesses, identify hazards, fix problems, and prevent additional injuries and illnesses. </p><p>Under the new rule, all employers with 250 or more employees at a single facility covered by the recordkeeping regulation must electronically submit injury and illness information to OSHA in three forms: 300 (log of work-related illnesses and injuries), 300A (summary of work-related illnesses and injuries), and 301 (injury and illness incident report).</p><p>OSHA argues that, together, these forms will paint a picture of the number of injuries, number of fatalities, lost time, total lost days, total restricted work days, and the total number of employees at each location of a company.</p><p>And OSHA will be able to use it to answer certain questions. For example, within a given industry, what are the characteristics of establishments with the highest injury and illness rates? What are the characteristics of establishments with the lowest rates of injuries and illnesses? What is the relationship between an establishment’s injury and illness data and data from other agencies?</p><p>Facilities with 20 to 249 employees in certain high-risk industries will also be required to submit information from form 300A electronically. 
These are 67 industries identified by OSHA that have historically high rates of occupational injury and illness, including manufacturing, construction, urban transit systems, utilities, and more.</p><p>The requirement for facilities to submit the 300A summaries electronically goes into effect on July 1, 2017. If required, facilities must submit forms 300 and 301 electronically by July 1, 2018, and will be required to submit all three forms electronically by March 2, 2019.</p><p>OSHA will upload this data, after ensuring that no personally identifiable information is included, to a publicly accessible database. The details of the database, however, have not yet been released because OSHA is still creating it.</p><p>OSHA’s mission is to protect the safety and health of workers. This new rule, OSHA’s Office of Communications tells Security Management, will support that mission.</p><p>First, as previously noted, access to injury data will help OSHA better target compliance assistance and enforcement resources to establishments where workers are at greatest risk.</p><p>“The final rule’s provisions requiring regular electronic submission of injury and illness data will allow OSHA to obtain a much larger data set of more timely, establishment-specific information about injuries and illnesses in the workplace,” the rule says. “This information will help OSHA use its enforcement and compliance assistance resources more effectively by enabling OSHA to identify the workplaces where workers are at greatest risk.”</p><p>One example OSHA gives in the rule itself is that the data will help it identify small and medium-sized employers who report high overall injury and illness rates for referral to its consultation program. 
</p><p>“OSHA could also send hazard-specific educational materials to employers who report high rates of injuries or illnesses related to those hazards, or letters notifying employers that their reported injury and illness rates were higher than the industry-wide rates,” the rule explains.</p><p>The practice of sending high-rate notification letters, for instance, has been associated with a 5 percent decrease in lost workday injuries and illnesses in the following three years, OSHA says.</p><p>OSHA also maintains that publicly disclosing work injury data will encourage employers to prevent work-related injuries and illnesses.</p><p>The new reporting requirements are also designed to save government time and money. The agency believes that the new rule will convince “employers to abate hazards and thereby prevent workplace injuries and illnesses, without OSHA having to conduct onsite inspections.” ​</p><h4>What else does the rule do?</h4><p>Along with the electronic reporting requirements, the rule also reemphasizes whistleblower provisions for employees to report injury and illness without fear of retaliation. </p><p>“The rule clarifies the existing implicit requirement that an employer’s procedure for reporting work-related injuries and illnesses must be reasonable and not deter or discourage employees from reporting,” the office explains. “It also incorporates the existing statute that prohibits retaliation against employees for reporting work-related injuries or illnesses.” </p><p>Including the term “reasonable” is new for OSHA, says Edwin Foulke, Jr., partner at Fisher Phillips who cochairs the firm’s Workplace Safety and Catastrophe Management Practice Group and who was the head of OSHA from 2006 to 2008. </p><p>“Before, you were required to make sure that your employees knew that there was a system to report,” he adds. 
Now, however, OSHA requires that that system be a reasonable one.</p><p>While it is unclear how exactly OSHA is defining “reasonable,” it does explain in the rule that “for a reporting procedure to be reasonable and not unduly burdensome, it must allow for reporting of work-related injuries and illnesses within a reasonable timeframe after the employee has realized that he or she has suffered a work-related injury or illness.”</p><p>If employers are caught discouraging employees from reporting illness or injury, they can be cited by OSHA for retaliation. “Before, the employee had to file a complaint. Now, for an employer to get cited and to be penalized, OSHA can do that in an inspection under this new standard,” Foulke says. “So this is a whole new area, and they’re going to be looking.” </p><p>Actions that could be considered retaliation include termination, reduction in pay, reassignment to a less desirable position, or any other adverse action that “could well dissuade” a reasonable employee from making a report, the rule explains.</p><p>OSHA also has taken the stance in the rule that “blanket post-injury drug testing policies deter proper reporting” of workplace injuries and illnesses. Because of this, the rule prohibits employers from using drug testing—or the threat of drug testing—as a form of adverse action against employees who report injuries or illnesses.</p><p>“To strike the appropriate balance here, drug testing policies should limit post-incident testing to situations in which employee drug use is likely to have contributed to the incident, and for which the drug test can accurately identify impairment caused by drug use,” the rule says. </p><p>For instance, OSHA says it would not be reasonable to drug-test an employee who reports a bee sting or a repetitive strain injury. 
</p><p>“Such a policy is likely only to deter reporting without contributing to the employer’s understanding of why the injury occurred, or in any other way contributing to workplace safety,” OSHA explains.</p><p>However, if workers’ compensation laws require an employer to conduct drug testing, then this type of drug testing would not be considered retaliatory, OSHA adds.​</p><h4>What should employers do? </h4><p>Because of potential liability and opportunities for citations, Foulke recommends that companies take several actions in response to the new rule. </p><p>For instance, employers should look at how they advise their employees to report injuries and illnesses under the record keeping standard. OSHA has said that companies can meet this requirement by posting the “Job Safety and Health—It’s the Law” workers’ rights poster from April 2015.</p><p>Employers should make sure that their reporting process is “reasonable and doesn’t somehow discourage people, because, if it is, they are going to get cited for it and maybe open themselves up to a whistleblower retaliation claim,” according to Foulke.</p><p>A whistleblower retaliation claim could be likely because this is an issue that OSHA has been increasingly focused on during the Obama administration’s second term, he says. </p><p>Employers also need to know their rights during an OSHA inspection, a process that many are unfamiliar with. 
For example, Foulke says that when OSHA comes in to do an inspection based on a complaint it has received, it will frequently attempt to expand the visit into a “wall-to-wall” inspection.</p><p>“If the employer doesn’t assert their rights and allows a wall-to-wall, then potentially they could have many more citations,” Foulke explains.</p><p>Additionally, the business community has expressed concerns that the new rule will force them to publicly reveal secret business details that were previously considered privileged and confidential.</p><p>“When you fill out the 300 logs and also the 300A summaries, they are going to talk about departments and processes—especially in the 301, you may have some information that may be somewhat proprietary,” Foulke says. “Employers are going to have to be very careful about what they put when they’re submitting their data, that they basically look and provide only the minimum that they are required to provide.”</p><p>And employers should also recognize how the data they submit to OSHA may be used once it is publicly available. This is because using the information from the 300 and 301 forms, analysts will be able to determine the death, injury, and illness rate of a particular company to compare it to the industry average. </p><p>“Now that data could be used by union organizers who want to try to organize a company to show how bad at safety they are,” Foulke explains. “They can take that data and say, ‘Look how many injuries and illnesses this company has.’”</p><p> “Plaintiffs’ lawyers could look at it and say, ‘Look at this company. They have all these injuries there. Obviously something is going on there, so I need to go out to that plant, find one of those employees who got injured, and throw a class action against the company for all these injuries,’” Foulke says.   
</p>

Search of Security Metrics

<p>At a major insurance company headquartered in the Midwestern United States, the assistant vice president for corporate security has used an environmental risk metric for the past 12 years to help the company decide where to place office facilities around the country. The company owns or leases hundreds of facilities across the United States. Corporate security regularly collects a suite of data, assigns weights to various factors, and develops a numeric score that places each facility into a low, medium, or high category of risk. For each risk category, written policy specifies a cluster of security measures that should be in place at the site. Exceptions can be granted, but the systematic approach results in uniformity and in efficiency in decision-making and security systems contracting. Most importantly, the metrics-based approach helps senior management understand the level of risk in site selection and make informed decisions on risk management. In addition, over time, the metrics have steered the corporation toward having a smaller percentage of its locations in high-risk sites.</p><p>This example illustrates how security professionals can use metrics to determine what works, measure the value of security operations, and demonstrate security's alignment with its organization's objectives. To help security managers use metrics more effectively, the ASIS Foundation funded research to create tools for discovering, developing, assessing, improving, and presenting security metrics. By using the tools, security professionals may be better positioned to manage their operations, measure their effectiveness, and communicate with senior management. </p><p>Metrics are measurements or other objective indicators collected over time to guide decision-making. 
The term is sometimes used interchangeably with measurements, analytics, and performance measures. With metrics, security managers can speak to senior leaders in familiar business language, offering measurable results that correlate with investment. Without compelling metrics, security managers and their budgets rely largely on the intuition of company leadership. </p><p>Two years ago, the ASIS Foundation implemented a new structure for assessing and overseeing security research. The first test of that structure was a proposal for research on security metrics, says Linda F. Florence, Ph.D., CPP, president, ASIS Foundation Board of Trustees. "The ASIS International Defense and Intelligence Council had a special interest in the topic, having made several presentations on metrics at the ASIS Annual Seminar and Exhibits. The council formed a vision of what the security field needed, found researchers who could perform the work, and helped the researchers develop a proposal for ASIS Foundation funding."</p><p>The Foundation Research Council approved the proposal, and the Foundation sought and received funding from the ASIS Board of Directors. The result was the ASIS Foundation Metrics Research Project. The Foundation awarded a grant to Global Skills X-Change (GSX) and Ohlhausen Research to undertake the project. GSX specializes in applying validation, measurement, and standards development techniques to produce business tools. Ohlhausen Research, Inc., conducts research in security, criminal justice, and technology.</p><h4>Depth Perception<br><br></h4><p>The project's research team consisted of the author as principal investigator; subject matter expert and former Director of Information Protection for the U.S.
Air Force Daniel McGarvey; Senior Analyst Megan Poore; and Technical Advisor Lance Anderson, Ph.D.</p><p>Throughout the research, which began in 2013, the ASIS Defense and Intelligence Council ensured that the security practitioner's point of view was represented by serving on the project's advisory board and expert panel.</p><p>The researchers gained insights into security metrics through a systematic review of the literature, an online survey of ASIS members, and lengthy follow-up interviews by phone. In addition, the research team was guided by an advisory board and an expert panel composed of security professionals with experience in the use of metrics. The project was completed in the spring of 2014.</p><p>The research found many books, articles, and reports discussing reasons to use metrics, characteristics of existing metrics, and methods for communicating metrics. Among the most valuable resources on security metrics were George Campbell's <em>Measures and Metrics in Corporate Security: Communicating Business Value</em> and Mary Lynn Garcia's <em>The Design and Evaluation of Physical Protection Systems</em>, as well as numerous articles in both <em>Harvard Business Review</em> and <em>MIT Sloan Management Review</em>—the latter on business metrics generally.</p><p>That said, most sources that examine security metrics operate at a conceptual level only. The literature has few specific strategies for developing or evaluating security metrics. Likewise, descriptions of empirically sound security metrics with statistical justification and evidence are scarce. </p><p>To uncover specific uses of security metrics and to gain an understanding of the different ways in which security professionals may be using metrics, the research team invited more than 3,000 ASIS members to participate in an online survey.
The survey's 20 questions asked about metrics collection, comparison to external benchmarks, return on investment, sharing and presentation of metrics, and alignment with organizational risks and objectives. The survey also examined the particulars of metrics usage among respondents.</p><p>The 297 respondents demonstrated a high degree of interest in metrics. Of the respondents who said they are not using security metrics, 78 percent said they would use metrics if they knew more about how to create and use them effectively. More than half of all respondents asked for more information from ASIS regarding metrics.</p><p>Respondents provided the research team with a detailed view of the many ways that security professionals are using metrics today, including focusing on topics, reporting data, sharing with the C-suite, aligning with organizational risk, and using a dashboard tool.</p><p><strong>Metrics topics.</strong> Respondents were asked which aspects of the security program they measure. The top five categories were security incidents, criminal incidents and investigations, cost against budget, security training and education, and guarding performance, which includes turnover and inspections. </p><p><strong>Reporting.</strong> Eighty percent of respondents who use metrics provide their metric findings to persons outside the security department. Recipi­ents of the information include senior management (79 percent of those who share metrics outside the security department), managers of other departments (59 percent), supervisors (51 percent), and people who report to the security department (47 percent). Those who share metrics provide the information quarterly (43 percent), monthly (40 percent), or annually (17 percent).</p><p><strong>Sharing.</strong> Respondents who share metrics with C-suite personnel were asked which elements they share. 
The top choices were security incidents (80 percent), cost against budget (62 percent), criminal incidents and investigations (57 percent), regulatory compliance (44 percent), and risk analysis process (40 percent).</p><p><strong>Alignment.</strong> Eighty percent of respondents who use metrics said that their metrics are tied to, aligned with, or part of the larger organizational risk process or organizational objectives. For example, some metrics protect the company's most important product line; other metrics may support business continuity, compliance, risk management, or client satisfaction. One respondent explained that top management sets broad goals and writes plans while security metrics demonstrate how effective those plans are.</p><p><strong>Dashboard tool.</strong> Forty-four percent of respondents who use metrics perform their data collection, review, or sharing via a security management dashboard tool.</p><p>This research makes it possible to clearly define security's role and contribution to the organization at the tactical, organizational, and strategic levels. The report provides a working metrics tool that can help practitioners use metrics in the most effective manner. </p><h4>In the Tool Belt<br><br></h4><p>GSX and Ohlhausen Research studied the current uses of security metrics and created several resources for practitioners. The Security Metrics Evaluation Tool (Security MET) helps security professionals develop, evaluate, and improve security metrics. A library of metric descriptions, each evaluated according to the Security MET criteria, provides valuable resources. Guidelines for using metrics can help security professionals inform and persuade senior management.</p><p>The tools, especially the Security MET, are designed to help security managers assess and refine metrics that they are using or considering, based on an intimate knowledge of conditions at their organization, in a manner guided by scientific assessment methods.
</p><p><strong>Security MET.</strong> The Security MET is meant to aid and empower the security manager, not to dictate any particular security decision. By providing a standard for scientific measurement, it offers guidance for improving the inputs that go into the security professional's own decision-making process.</p><p>The Security MET is a written instrument that security managers can use to assess the quality of specific security metrics. Users can determine whether an existing or proposed metric possesses scientific validity, organizational relevance (such as clear alignment with corporate risks or goals), return on investment, and practicality.</p><p>The tool was developed through a comprehensive, iterative process that involved synthesizing scientific literature, reviewing security industry standards, and obtaining input from metrics experts on the project's advisory board and expert panel. Many of the criteria come from the field of psychometrics, which is concerned with the measurement of mental traits, abilities, and processes. The psychometric literature addresses the measurement of complex human behaviors, including sources of error inherent in social and organizational situations. In addition, through its connection with legal guidelines and case law, psychometric theory provides ways to address complicated legal issues related to fairness and human error.</p><p>The tool presents nine criteria for evaluating a security metric. The criteria fall into three groups: technical, operational, and strategic.</p><p><em>Technical.</em> The technical criteria include reliability, validity, and generalizability. Reliability means the degree to which the metric yields consistent scores that are unaffected by sources of measurement error. Validity refers to the degree to which evidence based on theory or quantitative research supports drawing conclusions from the metric.
Generalizability means the degree to which conclusions drawn from the metric are consistent and applicable across different settings, organizations, timeframes, or circumstances.</p><p><em>Operational.</em> Operational criteria include the monetary and nonmonetary costs associated with metric development and administration, as well as timeliness and the extent to which metric data can be manipulated, coached, guessed, or faked by staff.</p><p><em>Strategic.</em> Strategic criteria include return on investment, organizational relevance, and communication. Return on investment is the extent to which a metric can be used to demonstrate cost savings or loss prevention in relation to relevant security spending. Organizational relevance is the extent to which the metric is linked to organizational risk management or a strategic mission, objective, goal, asset, threat, or vulnerability relevant to the organization—in other words, linked to the factors that matter the most to senior management. Communication refers to the extent to which the metric, metric results, and metric value can be communicated easily, succinctly, and quickly to key stakeholders, especially senior management.</p><p>A score sheet is presented at the end of the Security MET. The instrument is easy to score and imposes little to no time burden on staff. Lower scores on particular criteria show where a metric has room for improvement. </p><p>Here's an example of how the Security MET can be used to evaluate a real-life metric. At a major financial services firm, employees were being robbed of their mobile phones on the sidewalks all around the office as they came to work, when they went outside for lunch, or when they left to go home. The firm identified hot spots and times for phone theft and applied extra security measures. After reaching a maximum of 40 thefts in a two-month period, the number soon declined to zero.</p><p>Evaluating the metric with the Security MET provides some valuable insights. 
The metric—the number of mobile phone thefts—is highly reliable, as it is based on incident reports from employee victims, police reports, and video surveillance. Its validity appears to be confirmed by the outcome—the problem was eliminated. Collecting the data has little marginal cost, as the company already tracks and trends security incidents. Its organizational relevance is high, as it aligns with the firm's goal of attracting workers to the central business district. As for communication, it is a straightforward metric that is easy to explain. In terms of return on investment, it is hard to quantify the value of keeping employees safe and continuing to attract new employees.</p><p>Thus, while the metric appears to present a reasonable return on investment, the Security MET helps the user see that developing clear proof of ROI would be one way to strengthen this particular metric. The addition of a short survey asking employees if they feel more secure and would recommend the company to others would provide validation for both the solution and the metric.</p><p><strong>Metrics library.</strong> The researchers developed 16 summaries of metrics currently in use in the security field. The summaries were developed primarily through telephone interviews with online survey respondents. The summaries may serve as examples for security professionals who are considering ways to use metrics. (See box on page 58 for a complete list of topics.)</p><p>The library presents a three- to four-page summary of each metric. In addition, each metric is evaluated by several metrics experts, using the Security MET.
The metrics library is presented in the full project report.</p><p>These real-world metrics come from a variety of industries, including defense/aerospace, energy/oil, finance, government, insurance, manufacturing, pharmaceuticals, real estate management, retail, security services, shipping/logistics, and telecommunications.</p><p>Some of the metrics are more sophisticated and detailed than others, providing a range of examples for potential users to consider. The metrics are not presented as models of perfection. Rather, they are authentic examples that security professionals can follow, refine, or otherwise adapt when developing their own metrics.</p><p><strong>Guidelines.</strong> A key task in this research was to develop guidelines for effectively using security metrics to persuade senior management. What would make those presentations more compelling? Several recommendations emerged.</p><p>Present metrics that are aligned with the organization's objectives or risks or that measure the specific issues in which management is most interested. One of the most important measures is return on investment (ROI).</p><p>Present metrics that meet measurement standards. A metric may be more persuasive to senior management if it has been properly designed from a scientific point of view and has been evaluated against a testing tool, such as the Security MET, or established measurement and statistical criteria.</p><p>Tell a story. If the metric is prevention-focused, a security professional can make the metric compelling by naming the business resources threatened, stating the value of those resources, and describing the consequences if the event occurs. Another part of a compelling story is the unfolding of events over time. Metrics can show progress toward a specific strategic goal. </p><p>Use graphics and keep presentations short. Senior managers may be interested in only a few key measures.
While security professionals may choose to monitor many metrics via a dashboard interface, they should create a simpler dashboard for senior management. Some security professionals said they limit their presentations to five minutes.</p><p>Present metric data regularly. As data ages it becomes more historical, less actionable, and thus potentially less valuable. The research does not suggest an optimal interval for sharing security metrics with senior management, but the survey shows that 83 percent of security professionals who share metrics outside the department do so at least quarterly. </p><p>Future steps for helping security professionals improve their use of metrics include a webinar sponsored by the ASIS Defense and Intelligence Council and the further development of the metrics library. Other ideas under consideration include metrics training for security practitioners, the development of a tool for creating a metric from scratch and implementing it in an organization, and the creation of a library of audited—not merely self-reported—metrics. </p><p>The best security practice is evidence-based; without research, practitioners must rely on anecdotal information to make decisions. The ASIS Foundation continues to seek ideas for research projects that would increase security knowledge and help security professionals perform their work more effectively. </p><p>The complete project report, <em>Persuading Senior Management with Effective, Evaluated Security Metrics</em>, is available as a free download. The 196-page report contains the full text of the Security MET, the library of metric summaries (with evaluations), guidelines for presenting metrics to senior management, the project's literature review, and detailed results of the online survey.</p><p>Florence says, "We are proud to brand this quality research with the ASIS Foundation logo and share the findings with our members and the security profession as a whole.
This research will help propel security from an industry to a profession, where we belong." <br></p><p>Peter E. Ohlhausen is president of Ohlhausen Research, Inc., and served as principal investigator for the ASIS Foundation Metrics Research Project. He is a member of ASIS.</p>Target Trends<p>When most people think of Orlando, Florida, Walt Disney World Resort comes to mind. The world-renowned theme park makes Orlando the second most popular travel destination in the United States. But there is much more to the city than Mickey and Minnie Mouse. </p><p>Beyond the complex infrastructure that supports Orlando's 2.3 million citizens, the city is filled with parks and wildlife, the largest university in the country, and a vast hospitality industry that includes more than 118,000 hotel rooms. And International Drive, an 11-mile thoroughfare through the city, is home to attractions such as Universal Orlando Resort, SeaWorld Orlando, and the Orange County Convention Center, the site of ASIS International's 62nd Annual Seminar and Exhibits this month. </p><p>Hospitality goes hand-in-hand with security in Orlando, where local businesses and attractions see a constant flow of tourists from all over the world. And at the Dr. Phillips Center for the Performing Arts, which hosts events ranging from Broadway shows and concerts to community education, a new security director is changing the culture of theater to keep performers, staff, and visitors safe.</p><h4>The Living Room of the City</h4><p>Open since November 2014, the Dr. Phillips Center spans two blocks and is home to a 2,700-seat main stage, a 300-seat theater, and the Dr. Phillips Center Florida Hospital School of the Arts.
The building’s striking architecture, which includes a canopy roof, vast overhang, and a façade made almost entirely of glass, stretches across two blocks and is complemented by a front lawn and plaza.</p><p>After the June 11 shooting at Pulse nightclub less than two miles south of the theater, that lawn became the city’s memorial. Days after the shooting, the Dr. Phillips Center plaza, normally used for small concerts or events, hosted Orlando’s first public vigil. A makeshift memorial was established on the lawn, and dozens of mourners visited for weeks after the attack.</p><p>Chris Savard, a retired member of the Orlando Police Department, started as the center’s director of security in December, shortly after terrorists killed dozens and injured hundreds in attacks on soft targets in Paris. Prior to Savard, the center had no security director. Coming from a law enforcement background to the theater industry was a challenging transition, he says. </p><p>“Before I came here, I was with an FBI terrorism task force,” Savard says. “Bringing those ideologies here to the performing arts world, it’s just a different culture. Saying ‘you will do security, this is the way it is’ doesn’t work. You have to ease into it.”</p><p>The Dr. Phillips Center was up and running for a year before Savard started, so he had to focus on strategic changes to improve security: “The building is already built, so we need to figure out what else we can do,” he says. One point of concern was an overhang above the valet line right at the main entrance. Situated above the overhang is a glass-walled private donor lounge, and Savard notes that anyone could have driven up to the main entrance under the overhang and set off a bomb, causing maximum damage. 
“It was a serious chokepoint,” he explains, “and the building was designed before ISIS took off, so there wasn’t much we could do about the overhang.”</p><p>Instead, he shifted the valet drop-off point, manned by off-duty police officers, further away from the building. “We’ve got some people saying, ‘Hey, I’m a donor and I don’t want to walk half a block to come to the building, I want to park my vehicle here, get out, and be in the air conditioning.’ It’s a tough process, but it’s a work in progress. Most people have not had an issue whatsoever in regards to what we’ve implemented.”</p><p>Savard also switched up the use of off-duty police officers in front of the Dr. Phillips Center. He notes that it can be costly to hire off-duty police officers, who were used for traffic control before he became the security director, so he reduced the number of officers used and stationed them closer to the building. He also uses a K-9 officer, who can quickly assess a stopped or abandoned vehicle on the spot. </p><p>“When you pull into the facility, you see an Orlando Police Department K-9 officer SUV,” Savard explains. “We brought two other valet officers closer to the building, so in any given area you have at least four police cars or motorcycles that are readily available. We wanted to get them closer so it was more of a presence, a deterrent.” The exact drop-off location is constantly changing to keep people on their toes, he adds.</p><p>The Dr. Phillips Center was already using Andy Frain Services, which provides uniformed officers to patrol the center around the clock. Annette DuBose manages the contracted officers. </p><p>When he started in December, Savard says he was surprised that no bag checks were conducted. When he brought up the possibility of doing bag checks, there was some initial pushback—it’s uncommon for theater centers to perform any type of bag check. “In the performing arts world, this was a big deal,” Savard says. 
"You have some high-dollar clientele coming in, and not a lot of people want to be inconvenienced like that."</p><p>When Savard worked with DuBose and her officers to implement bag checks, he said everyone was astonished at what the officers were finding. "I was actually shocked at what people want to bring in," Savard says. "Guns, knives, bullets. I've got 25-plus years of being in law enforcement, and seeing what people bring in…it's a Carole King musical! Why are you bringing your pepper spray?"</p><p>Savard acknowledges that Florida's concealed carry law makes bag checks necessary—and tricky. As a private entity, the Dr. Phillips Center can prohibit guns, but that doesn't stop people from trying to bring them in, he notes. The Andy Frain officers have done a great job at kindly but firmly asking patrons to take their guns back to their cars, Savard says—and having a police officer nearby helps when it comes to argumentative visitors.</p><h4>Culture, Community, and Customer Service</h4><p>There have been more than 300 performances since the Dr. Phillips Center opened, and with two stages, the plaza, classrooms, and event spaces, there can be five or six events going on at once. </p><p>"This is definitely a soft target here in Orlando," Savard notes. "With our planned expansion, we can have 5,000 people in here at one time. What a target—doing something in downtown Orlando to a performing arts center."</p><p>The contract officers and off-duty police carry out the core of the security-related responsibilities, but Savard has also brought in volunteers to augment the security presence. As a nonprofit theater, the Dr. Phillips Center has a large number of "very passionate" volunteers—there are around 50 at each show, he says.
</p><p>The volunteers primarily provide customer service, but Savard says he wants them to have a security mindset, as well—“the more eyes, the better.” He teaches them basic behavioral assessment techniques and trends they should look for. </p><p>“You know the guy touching his lower back, does he have a back brace on or is he trying to keep the gun in his waistband from showing?” Savard says. “Why is that person out there videotaping where people are being dropped off and parking their cars? Is it a bad guy who wants to do something?”</p><p>All 85 staffers at the Dr. Phillips Center have taken active shooter training classes, and self-defense classes are offered as well. Savard tries to stress situational awareness to all staff, whether they work in security or not. </p><p>“One of the things I really want to do is get that active shooter mindset into this environment, because this is the type of environment where it’s going to happen,” Savard explains. “It’s all over the news.”</p><p>Once a month, Savard and six other theater security directors talk on the phone about the trends and threats they are seeing, as well as the challenges with integrating security into the performing arts world. </p><p>“Nobody wanted the cops inside the building at all, because it looked too militant,” Savard says. “And then we had Paris, and things changed. With my background coming in, I said ‘Listen, people want to see the cops.’” </p><p>Beyond the challenge of changing the culture at the Dr. Phillips Center, Savard says he hopes security can become a higher priority at performing arts centers across the country. The Dr. Phillips Center is one of more than two dozen theaters that host Broadway Across America shows, and Savard invited the organization’s leaders to attend an active shooter training at the facility last month. 
</p><p>"There's a culture in the performing arts that everything's fine, and unfortunately we know there are bad people out there that want to do bad things to soft targets right now," Savard says. "The whole idea is to be a little more vigilant in regards to protecting these soft targets."</p><p>Savard says he hopes to make wanding another new norm at performing arts centers. There have already been a number of instances where a guest gets past security officers with a gun hidden under a baggy Cuban-style shirt. "I'll hear that report of a gun in the building, and the hair stands up on the back of my neck," Savard says. "It's a never-ending goal to continue to get better and better every time. We're not going to get it right every time, but hopefully the majority of the time."</p><p>The Dr. Phillips Center is also moving forward with the construction of a new 1,700-seat acoustic theater, which will be completed within the next few years. The expansion will allow the center to host three shows at one time—not including events in private rooms or on the plaza. Savard is already making plans for better video surveillance and increased security staff once the new theater is built.</p><p>"We really try to make sure that everybody who comes into the building, whether or not they're employed here, is a guest at the building, and we want to make sure that it's a great experience, not only from the performance but their safety," Savard says. "It's about keeping the bad guys out, but it's also that you feel really safe once you're in here." </p>