Cybercrime

 

 

Global Cyber Awareness
By Megan Gates, 2018-01-01
https://sm.asisonline.org/Pages/Global-Cyber-Awareness.aspx
<p>​Arby's. InterContinental Hotels Group. Equifax. Deloitte. The U.S. Securities and Exchange Commission. Saks Fifth Avenue. UNC Health Care. Gmail. These are just a handful of the organizations that experienced a data breach in 2017.</p><p>But as these major cyber incidents grabbed international headlines week after week, were mulled over by regulators and legislatures across the globe, and spawned a slew of lawsuits, many organizations continued to struggle to comprehend and manage emerging cyber risks, according to PricewaterhouseCoopers' (PwC's) recent report, The Global State of Information Security Survey 2018 (GSISS). </p><p>The survey of more than 9,500 CEOs, CFOs, CIOs, CISOs, CSOs, vice presidents, and directors of IT and security practices from 122 countries found that the biggest potential consequences of a cyberattack were disruption of operations (40 percent), compromise of sensitive data (39 percent), harm to product quality (32 percent), physical property damage (29 percent), and harm to human life (22 percent).</p><p>"Yet despite this awareness, many companies at risk of cyberattacks remain unprepared to deal with them," the survey said. "Forty-four percent of the 9,500 executives in 122 countries surveyed by the 2018 GSISS say they do not have an overall information security strategy. Forty-eight percent say they do not have an employee security awareness training program, and 54 percent say they do not have an incident response process."</p><p>That's not to say, however, that executives assessed their preparedness uniformly across the globe. 
</p><p>For instance, in Japan 72 percent of organizations said they had an overall cybersecurity strategy—possibly because cyberattacks are seen as the leading national security threat in the country. </p><p>But high preparedness does not translate into low risk for cyberattacks or incidents. The survey explained that while the United States is ranked second—behind Singapore—as the nation most committed to cybersecurity, it's still vulnerable to the number one business risk in North America: "large-scale cyberattacks or malware causing large economic damages, geopolitical tensions, or widespread loss of trust in the Internet."</p><p>The survey further explained that, based on U.S. Department of Homeland Security assessments, if more than 60 U.S. critical infrastructure entities were damaged by a single cyber incident, it "could reasonably result in $50 billion in economic damages, or 2,500 immediate deaths, or a severe degradation of U.S. national defense."</p><p>Because of this threat, PwC found that cyber resilient organizations will be those "best positioned to sustain operations, build trust with customers, and achieve high economic performance," according to the survey.</p><p>"Many organizations need to evaluate their digital risk and focus on building resilience for the inevitable," said Sean Joyce, PwC's U.S. cybersecurity and privacy leader, in the survey.</p><p>To achieve this level of resiliency, the survey results suggested seven initiatives: having leaders assume greater responsibility for building cyber resilience, digging deeper to uncover risks, engaging the board, pursuing resilience as a path to rewards, leveraging lessons learned, stress-testing interdependencies, and focusing on risks involving data manipulation and destruction. </p><p>Leadership. 
Most organizations' boards are not shaping their security strategies or investment plans—just 44 percent of GSISS respondents said that their corporate boards actively participate in their companies' overall security strategy.</p><p>"Senior leaders driving the business must take ownership of building cyber resilience," the report said. "Establishing a top-down strategy to manage cyber and privacy risks across the enterprise is essential. Resilience must be integrated into business operations."</p><p>The board and the CEO must drive this philosophy from the top down to accomplish this, says Ryan LaSalle, security growth and strategy lead at Accenture.</p><p>"If security is kind of an outsourced risk manager, where you throw risk over the wall and hope security catches it, it fails," he explains. "The only way security becomes more effective after all the innovation and investment's gone into it is if the business is accountable for it, and it's across the business."</p><p>Risks. Cybersecurity threats change daily, and organizations that want to increase their resiliency will need to uncover and manage new risks in new technologies. Chief among them are the risks associated with the Internet of Things (IoT) ecosystem. </p><p>But few survey respondents said their organizations are planning to assess IoT risks. Respondents were also divided on who was responsible for assessing IoT risk in their organization: 29 percent said it belonged to the CISO, 20 percent said it belonged to the engineering staff, and 17 percent said it belonged to the chief risk officer.</p><p>"Many organizations could manage cyber risks more proactively," the survey found. 
"Many key processes for uncovering cyber risks in business systems—including penetration tests, threat assessments, active monitoring of information security, and intelligence and vulnerability assessments—have been adopted by less than half of survey respondents."</p><p>One reason this might be the case is that many organizations are only addressing cybersecurity in a reactive manner, said Christopher Valentino, director of joint cyberspace programs and technical fellow at Northrop Grumman, in a presentation at CyberTalks in Washington, D.C.</p><p>Most cybersecurity technologies are all about reacting "to a breach, to a threat, to some event" based on something that we already know, such as a signature or pattern, Valentino explained. To be more resilient, organizations have to make a fundamental shift to being proactive in addressing cybersecurity threats.</p><p>One way to do this is by training employees about cyberthreats through awareness campaigns and even spear phishing testing, Valentino said. Northrop Grumman does this, and Valentino, despite his extensive cyber experience, said he failed his first test.</p><p>Companies also need to engage in better information sharing and coordination with stakeholders to address cyber risks, the PwC survey found.</p><p>"Only 58 percent of respondents say they formally collaborate with others in their industry, including competitors, to improve security and reduce the potential for future risks," the survey said. "Trusted, timely, actionable information about cyber threats is a critical enabler for rapid-response capabilities that support resilience. 
Across organizations, sectors, countries, and regions, building the capability to withstand cyber shocks is a team effort, the effectiveness of which will be diminished without greater and more significant participation."</p><p>Healthcare institutions, for instance, have been reluctant to share cyberthreat indicator information due to fears that regulators might come after them, said Christopher Wlaschin, CISO at the U.S. Department of Health and Human Services (HHS), at CyberTalks. </p><p>Some larger institutions that are sharing information are doing so through automated methods, but Wlaschin said most of the healthcare industry in the United States is not capable of machine speed sharing at this point because it lacks both the funding and the staff.</p><p>Because of this, HHS is working with Information Sharing and Analysis Centers (ISACs) to make shared information as meaningful as possible for those who choose to participate—especially in small, medium, and rural settings, Wlaschin explained. </p><p>The "collective awareness and preparedness of the healthcare sector relies on information sharing," he said.</p><p>Lessons. Leaders from all sectors must work together to test cyber dependency and interconnectivity risks, and address accountability, liability, responsibility, consequence management, and norms, the PwC survey said. </p><p>To do this, the survey suggests that leaders take advantage of resources that offer insights into these issues, such as disaster response case studies, the National Association of Corporate Directors' 2017 Cyber Risk Oversight Handbook, and emerging guidelines from the Information Sharing and Analysis Organization standards body.</p><p>The survey also recommends leaders look at emerging research to learn lessons on how to increase resiliency. For instance, the U.S. Department of Energy awarded $20 million to its National Laboratories and partners to develop cybersecurity tools to increase resilience and risk management of the U.S. 
electric grid and oil and gas infrastructure.</p><p>Testing. When patching a system, IT professionals typically engage in a testing period to make sure it works and to see what the patch's effect will be on the network. Industry sectors need to take the same approach to cybersecurity to boost resiliency.</p><p>"All key industry sectors across the world would do well to stress-test their interdependencies with simulated cyberattack scenarios designed to inform risk management," PwC found. "Dan Geer, CISO at In-Q-Tel, has advocated developing cybersecurity stress test scenarios aimed at answering the following question: 'Can I withstand the failure of others on whom I depend?'"</p><p>Some sectors are already conducting these tests, such as the North American energy sector in its biennial GridEx exercise, which simulates cyber and physical attacks on the electric grid, but more can be done to see how a widespread cyberattack would impact the sector—and others.</p><p>"Case studies of non-cyber disasters have shown that cascading events often begin with the loss of power—and many systems are impacted instantaneously or within one day, meaning there is generally precious little time to address the initial problem before it cascades," the survey said. "Interdependencies between critical and non-critical networks often go unnoticed until trouble strikes." ​</p>
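The question Geer poses, and the cascading power-loss pattern the survey cites, can be made concrete with a toy simulation. The dependency graph below is hypothetical and purely illustrative, not drawn from the survey:

```python
from collections import deque

# Hypothetical dependency graph for illustration only: each key lists the
# systems that depend on it, so a failure propagates along these edges.
DEPENDENTS = {
    "power": ["water", "telecom", "hospital"],
    "telecom": ["banking", "hospital"],
    "water": ["hospital"],
    "banking": [],
    "hospital": [],
}


def cascade(initial_failure):
    """Return every system lost once a single initial failure propagates."""
    failed = {initial_failure}
    queue = deque([initial_failure])
    while queue:
        system = queue.popleft()
        for dependent in DEPENDENTS.get(system, []):
            if dependent not in failed:
                failed.add(dependent)
                queue.append(dependent)
    return failed


if __name__ == "__main__":
    # Losing power takes down everything downstream in this toy model.
    print(sorted(cascade("power")))
```

In this sketch, a failure at "power" reaches every other system while a failure at "banking" stays contained, which is exactly the kind of interdependency insight a stress test is meant to surface before trouble strikes.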

Held Hostage (2017-12-01): https://sm.asisonline.org/Pages/Held-Hostage-.aspx
An Identity Crisis (2017-12-01): https://sm.asisonline.org/Pages/An-Identity-Crisis.aspx
Cutting-Edge Criminals (2017-12-01): https://sm.asisonline.org/Pages/Cutting-Edge-Criminals.aspx
Driving the Business (2017-10-01): https://sm.asisonline.org/Pages/Driving-the-Business.aspx
Klososky Opines on the Future of Technology (2017-09-27): https://sm.asisonline.org/Pages/Klososky-Opines-on-the-Future-of-Technology.aspx
Hackers Hit Equifax, Compromising 143 Million Americans' Data (2017-09-08): https://sm.asisonline.org/Pages/Hackers-Hit-Equifax,-Compromising-143-Million-Americans’-Data.aspx
Data Breach Trends (2017-08-01): https://sm.asisonline.org/Pages/Data-Breach-Trends.aspx
Book Review: Data Hiding (2017-08-01): https://sm.asisonline.org/Pages/Book-Review---Data-Hiding.aspx
Vulnerability Rediscovery Occurs At More Than Twice The Previously Reported Rate (2017-07-21): https://sm.asisonline.org/Pages/Vulnerability-Rediscovery-Occurs-At-More-Than-Twice-The-Previously-Reported-Rate.aspx
Book Review: Business Theft and Fraud: Detection and Prevention (2017-07-17): https://sm.asisonline.org/Pages/Business-Theft-and-Fraud--Detection-and-Prevention.aspx
Survey Of InfoSec Professionals Paints A Dark Picture Of Cyber Defenses (2017-07-07): https://sm.asisonline.org/Pages/Survey-Of-InfoSec-Professionals-Paints-A-Dark-Picture-Of-Cyber-Defenses.aspx
Ukraine Among Countries Affected by Petya Ransomware Attack (2017-06-27): https://sm.asisonline.org/Pages/Ukraine-Among-Countries-Affected-by-Petya-Ransomware-Attack-.aspx
Average Cost of Data Breach Declines Globally for First Time (2017-06-20): https://sm.asisonline.org/Pages/Average-Cost-of-Data-Breach-Declines-Globally-First-Time.aspx
EU Needs Comprehensive Strategy To Address Cybersecurity Risks, Think Tank Finds (2017-06-09): https://sm.asisonline.org/Pages/EU-Needs-Comprehensive-Strategy-To-Address-Cybersecurity-Risks,-Think-Tank-Finds.aspx
Most Companies Take More Than A Month To Detect Cyberattackers (2017-06-02): https://sm.asisonline.org/Pages/Most-Companies-Take-More-Than-A-Month-To-Detect-Cyberattackers.aspx
Hacking Culture (2017-06-01): https://sm.asisonline.org/Pages/Hacking-Culture.aspx
IT Security Professionals Admit To Hiding Data Breaches in New Survey (2017-05-09): https://sm.asisonline.org/Pages/IT-Security-Professionals-Admit-To-Hiding-Data-Breaches,-Survey-Finds--.aspx
Cyber War Games (2017-04-01): https://sm.asisonline.org/Pages/Cyber-War-Games.aspx
Outdated Protocols and Practices Put the IoT Revolution at Risk (2017-03-24): https://sm.asisonline.org/Pages/Outdated-Protocols-and-Practices-Put-the-IoT-Revolution-at-Risk.aspx
Book Review: Hacked Again (2017-02-01): https://sm.asisonline.org/Pages/Hacked-Again.aspx

You May Also Like...

The Unique Threat of Insiders
https://sm.asisonline.org/Pages/The-Unique-Threat-of-Insiders.aspx
<p>​It’s perhaps the most infamous incident of an insider threat in modern times. During the spring and summer of 2013, then-National Security Agency (NSA) contractor and Sharepoint administrator Edward Snowden downloaded thousands of documents about the NSA’s telephone metadata mass surveillance program onto USB drives, booked a flight to Hong Kong, and leaked those documents to the media.</p><p>An international manhunt was launched, Snowden fled to Moscow, hearings were held in the U.S. Congress, and new policies were created to prevent another insider breach. The damage a trusted insider can do to an organization became painfully obvious.</p><p>“If you’d asked me in the spring of 2013…what’s the state of your defense of the business proposition as it validates the technology, people, and procedures? I would have said, ‘Good. Not perfect,’” said Chris Inglis, former deputy director and senior civilian leader of the NSA during the Snowden leaks, in a presentation at the 2017 RSA Conference in San Francisco.</p><p>“I would have said that ‘we believe, given our origins and foundations, and folks from information assurance, that that’s a necessary accommodation,’” he explained. “We make it such that this architecture—people, procedure, and technology—is defensible.”</p><p>Inglis also would have said that the NSA vetted insiders to ensure trustworthiness, gave them authority to conduct their jobs, and followed up with them if they exceeded that authority—intentionally or unintentionally—to remediate it. </p><p>“We made a critical mistake. We assumed that outsider external threats were different in kind than insider threats,” Inglis said. “My view today is they are exactly the same. 
All of those are the exercise of privilege.”</p><p>Inglis’ perspective mirrors similar findings from the recent SANS survey Defending Against the Wrong Enemy: 2017 SANS Insider Threat Survey by Eric Cole, SANS faculty fellow and former CTO of McAfee and chief scientist at Lockheed Martin.</p><p>The SANS survey of organizations with 100 to 100,000 employees found that it can be easy to conclude that external attacks should be the main focus for organizations. </p><p>“This conclusion would be wrong. The critical element is not the source of a threat, but its potential for damage,” Cole wrote. “Evaluating threats from that perspective, it becomes obvious that although most attacks might come from outside the organization, the most serious damage is done with help from the inside.”​</p><h4>Insider Threat Programs</h4><p>Incidents like the Snowden leaks and the more recent case of Harold Thomas Martin III, an NSA contractor accused of taking top secret information home with him, along with other incidents of economic espionage, have raised awareness of the impact insider threats can have. However, many organizations have not adjusted their security posture to mitigate those threats.</p><p>In its survey, SANS found that organizations recognize insider threat as the “most potentially damaging component of their individual threat environments.” “Interestingly, there is little indication that most organizations have realigned budgets and staff to coincide with that recognition.”</p><p>Of the organizations surveyed, 49 percent said they are in the process of creating an insider threat program, but 31 percent still do not have a plan and are not addressing insider threats through such a plan. </p><p>“Unfortunately, organizations that lack effective insider threat programs are also unable to detect attacks in a timely manner, which makes the connection difficult to quantify,” SANS found. 
“From experience, however, there is a direct correlation between entities that ignore the problem and those that have major incidents.”</p><p>Additionally, because many are not monitoring for insider threats, most organizations claim that they have never experienced an insider threat. “More than 60 percent of the respondents claim they have never experienced an insider threat attack,” Cole wrote. “This result is very misleading. It is important to note that 38 percent of the respondents said they do not have effective ways to detect insider attacks, meaning the real problem may be that organizations are not properly detecting insider threats, not that they are not happening.”</p><p>The survey also found that the losses from insider threats are relatively unknown because they are not monitored or detected. Due to this, organizations cannot put losses from insider threats into financial terms and may not devote resources to addressing the issue, making it difficult or impossible to determine the cost of an insider attack.</p><p>For instance, an insider could steal intellectual property and product plans and sell them to a competitor without being detected.</p><p>“Subsequent failure of that product might be attributed to market conditions or other factors, rather than someone ‘stealing it,’” Cole wrote. “Many organizations, in my experience, are likely to blame external factors and only discover after detailed investigation that the true cause is linked back to an insider.”</p><p>And when organizations do discover that an insider attack has occurred, most have no formal internal incident response plan to address it.</p><p>“Despite recognition of insiders as a common and vulnerable point of attack, fewer than 20 percent of respondents reported having a formal incident response plan that deals with insider threat,” according to the SANS survey. 
</p><p>Instead, most incident response plans are focused on external threats, Cole wrote, which may explain why companies struggle to respond to insider threats.</p><p>Organizations are also struggling to deal with both malicious and accidental insider threats—a legitimate user whose credentials were stolen or who has been manipulated into giving an external attacker access to the organization. “Unintentional insider involvement can pose a greater risk, and considerably more damage, by allowing adversaries to sneak into a network undetected,” the survey found. “Lack of visibility and monitoring capability are possible explanations for the emphasis on malicious insiders.”</p><p>To begin to address these vulnerabilities, SANS recommends that organizations identify their most critical data, determine who has access to that data, and restrict access to only those who need it. Then, organizations should focus on increasing visibility into users’ behavior to be proactive about insider threats. </p><p>“We were surprised to see 60 percent of respondents say they had not experienced an insider attack,” said Cole in a press release. “While the confidence is great, the rest of our survey data illustrates organizations are still not quite effective at proactively detecting insider threats, and that increased focus on individuals’ behaviors will result in better early detection and remediation.”​</p><h4>Trusted People</h4><p>When the NSA recruits and hires people, it vets them thoroughly to ensure their trustworthiness, according to Inglis.</p><p>“We ultimately want to bring somebody into the enterprise who we can trust, give them some authority to operate within an envelope that doesn’t monitor their tests item by item,” he explained. “Why? Because it’s within that envelope that they can exceed your expectations and the adversary’s expectations, your competitors’ expectations, and hopefully the customers’ expectations. 
</p><p>You want them to be agile, creative, and innovative.”</p><p>To do this, the NSA would go to great lengths to find people with technical ability and possible trustworthiness. Then it or a third party would vet them, looking at their finances and their background, conducting interviews with people who knew them, and requiring polygraph examinations.</p><p>After the Snowden leaks, the U.S. federal government examined the work of its contract background screening firm—United States Investigations Services (USIS). USIS had cleared both Snowden and the Washington Navy Yard shooter Aaron Alexis. The government decided to reduce its contracted work with the company.</p><p>USIS later agreed to pay $30 million to settle U.S. federal fraud charges, forgoing payments that it was owed by the U.S. Office of Personnel Management for conducting background checks. The charges included carrying out a plot to “flush” or “dump” individual cases that it deemed to be low level to meet internal USIS goals, according to The Hill’s coverage of the case.</p><p>“Shortcuts taken by any company that we have entrusted to conduct background investigations of future and current federal employees are unacceptable,” said Benjamin Mizer, then head of the U.S. Department of Justice’s Civil Division, in a statement. “The Justice Department will ensure that those who do business with the government provide all of the services for which we bargained.”</p><p>This part of the process—vetting potential employees and conducting background checks—is where many private companies go wrong, according to Sandra Stibbards, owner and president of Camelot Investigations and chair of the ASIS International Investigations Council.</p><p>“What I’ve come across many times is companies are not doing thorough backgrounds, even if they think they are doing a background check—they are not doing it properly,” she says. 
</p><p>For instance, many companies will hire a background screening agency to do a check on a prospective employee. The agency, Stibbards says, will often say it’s doing a national criminal search when really it’s just running a name through a database that has access to U.S. state and county criminal and court records that are online.</p><p>“But the majority of counties and states don’t have their criminal records accessible online,” she adds. “To really be aware of the people that you’re getting and the problem with the human element, you need to have somebody who specializes and you need to…invest the money in doing proper background checks.”</p><p>To do this, a company should have prospective employees sign a waiver that informs them that it will be conducting a background check on them. This check, Stibbards says, should involve looking at criminal records in every county and state the individual has lived in, many of which will need to be visited in person.</p><p>She also recommends looking into any excessive federal court filings the prospective employee may have made.</p><p>“I’ll look for civil litigation, especially in the federal court because you get people that are listed as a plaintiff and they are filing suits against companies for civil rights discrimination, or something like that, so they can burn the company and get money out of it,” Stibbards adds.</p><p>Additionally, Stibbards suggests looking for judgments, tax liens, and bankruptcies, because that gives her perspective on whether a person is reliable and dependable.</p><p>“It’s not necessarily a case breaker, but you want to have the full perspective of if this person is capable of managing themselves, because if they are not capable of managing themselves, they may not make the greatest employee,” she says.</p><p>Companies should ensure that their background screenings also investigate the publicly available social media presence of potential employees. 
Companies can include information about this part of the process in the waiver that applicants sign agreeing to a background check to avoid legal complications later on. </p><p>“I’m going to be going online to see if I see chatter about them, or if they chat a lot, make comments on posts that maybe are inappropriate, if they maintain Facebook, LinkedIn, and Twitter,” Stibbards says. </p><p>Posting frequently to social media might be a red flag. “If you find somebody on Facebook that’s posting seven, eight, nine, or 10 times a day, this is a trigger point because social media is more important to them than anything else they are doing,” Stibbards adds.</p><p>And just because a prospective employee is hired doesn’t mean that the company should discontinue monitoring his or her social media. While ongoing review is typically a routine measure, it can lead to disciplinary action for an employee who made it through the initial vetting process. For instance, Stibbards was hired by a firm to investigate an employee after the company had some misgivings about certain behaviors.</p><p>“Not only did we find criminal records that weren’t reported, but we then found social media that indicated that the employee was basically a gang member—pictures of guns and the whole bit,” Stibbards says.</p><p>It’s also critical, once a new employee has been brought on board, to introduce him or her to the culture of the organization—an aspect that was missing in Snowden’s onboarding process, Inglis said. This is because Snowden was a contractor working for the NSA, and regulations prohibited the U.S. government from training him. 
</p><p>“You show up as a commodity on whatever day you show up, and you’re supposed to sit down, do your work—sit down, shut up, and color within the lines,” Inglis explained.</p><p>So on Snowden’s first day at the NSA, he was not taken to the NSA Museum like other employees and taught about the agency’s history, the meaning of the oath new employees take, and the contributions the NSA makes to the United States.</p><p>“Hopefully there are no dry eyes at that moment in time, having had a history lesson laying out the sense of the vitality and importance of this organization going forward,” Inglis explained. “We don’t do that with contractors. We just assume that they already got that lesson.”</p><p>If companies fail to introduce contractors and other employees to the mission of the organization and its culture, those employees will not feel that they are part of the organization.​</p><h4>Trusted Technology</h4><p>Once trusted people are onboarded, companies need to evaluate their data—who has access to it, what controls are placed on it to prevent unwarranted access, and how that access is monitored across the network.</p><p>“The one thing I always recommend to any company is to have a monitoring system for all of their networks; that is one of the biggest ways to avoid having issues,” Stibbards says. “Whether it’s five people working for you or 100, if you let everybody know and they are aware when they are hired that all systems—whether they are laptops or whatever on the network—are all monitored by the company, then you have a much better chance of them not doing anything inappropriate or…taking information.”</p><p>These systems can be set up to flag when certain data is accessed or if an unusual file type is emailed out of the network to another address. 
</p><p>Simon Gibson, fellow security architect at Gigamon and former CISO at Bloomberg LP, had a system like this set up at Bloomberg, which alerted security staff to an email sent out with an Adobe PDF of an executive’s signature.</p><p>“He’s a guy who could write a check for a few billion dollars,” Gibson explains. “His signature was detected in an email being sent in an Adobe PDF, and it was just his signature…of course the only reason you would do that is to forge it, right?”</p><p>So, the security team alerted the business unit to the potential fraud. But after a quick discussion, the team found that the executive’s signature was being sent by a contractor to create welcome letters for new employees.</p><p>“From an insider perspective, we didn’t know if this was good or bad,” Gibson says. “We just knew that this guy’s signature probably ought not be flying in an email unless there’s a really good reason for it.”</p><p>Thankfully, Bloomberg had a system designed to detect when that kind of activity was taking place in its network and was able to quickly determine whether it was malicious. Not all companies are in the same position, says Brian Vecci, technical evangelist at Varonis, an enterprise data security provider.</p><p>In his role as a security advocate, Vecci goes out to companies and conducts risk assessments to look at what kinds of sensitive data they have. Forty-seven percent of companies he’s looked at have had more than 1,000 sensitive data files that were open to everyone on their network. “I think 22 percent had more than 10,000 or 12,000 files that were open to everybody,” Vecci explains. “The controls are just broken because there’s so much data and it’s so complex.”</p><p>To begin to address the problem, companies need to identify what their most sensitive data is and do a risk assessment to understand what level of risk the organization is exposed to. 
“You can’t put a plan into place for reducing risk unless you know what you’ve got, where it is, and start to put some metrics or get your arms around what is the risk associated to this data,” Vecci says. </p><p>Then, companies need to evaluate who should have access to what kinds of data, and create controls to enforce that level of access. </p><p>This is one area that allowed Snowden to gain access to the thousands of documents that he was then able to leak. Snowden was a Sharepoint administrator who populated a server so thousands of analysts could use that information to chase threats. His job was to understand how the NSA collects, processes, stores, queries, and produces information.</p><p>“That’s a pretty rich, dangerous set of information, which we now know,” Inglis said. “And the controls were relatively low on that—not missing—but low because we wanted that crowd to run at that speed, to exceed their expectations.”</p><p>Following the leaks, the NSA realized that it needed to place more controls on data access because, while a major leak like Snowden’s had a low probability of happening, when it did happen the consequences were extremely high. </p><p>“Is performance less sufficient than it was before these maneuvers? Absolutely,” Inglis explained. “But is it a necessary alignment of those two great goods—trust and capability? Absolutely.”</p><p>Additionally, companies should have a system in place to monitor employees’ physical access at work to detect anomalies in behavior. For instance, if a system administrator who normally comes to work at 8:00 a.m. and leaves at 5:00 p.m. every day suddenly comes into the office at 2:00 a.m., or shows up at a workplace with a data storage unit that’s not in his normal rotation, his activity should be a red flag.</p><p>“That ought to be a clue, but if you’re not connecting the dots, you’re going to miss that,” Inglis said.  
</p><h4>Trusted Processes</h4><p>To get full value from the technology that monitors network traffic, however, companies need processes for responding to anomalies. This is especially critical because the security team often is not completely aware of what business units in the company are doing, Gibson says.</p><p>While at Bloomberg, his team would occasionally get alerts that someone had sent something sensitive—such as a document marked confidential—to a private email address. “When the alert would fire, it would hit the security team’s office and my team would be the first people to open it and look at it and try to analyze it,” Gibson explains. “The problem is, the security team has no way of knowing what’s proprietary and valuable, and what isn’t.”</p><p>To gather this information, the security team needs a healthy relationship with the rest of the organization, so it can reach out to others in the company—when necessary—to quickly determine whether an alert is a true threat or legitimate business, like the signature email. </p><p>Companies also need a process in place to determine whether an employee used his or her credentials to inappropriately access data on the network, or whether those credentials were compromised and used by a malicious actor. </p><p>Gibson says this is one of the main threats he examines at Gigamon from an insider threat perspective because most attacks are carried out using people’s credentials. “For the most part, on the network, everything looks like an insider threat,” he adds. 
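</p><p>The question Gibson poses, whether credentials are being used properly, can be asked of login logs in a crude first pass. A minimal sketch, with a hypothetical event shape and baseline:</p>

```python
def unfamiliar_host_logins(events, baseline):
    """events: iterable of (user, source_host) login records.
    baseline: maps each user to the set of hosts that user has logged
    in from before. Returns logins from hosts outside the baseline;
    as with the signature alert, each hit still needs a human to
    decide whether it is good or bad."""
    return [(user, host) for user, host in events
            if host not in baseline.get(user, set())]
```

<p>A domain-controller login from a host an administrator has never used before is exactly the sort of record this would surface.</p><p>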
“Take our IT administrator—someone used his username and password to log in to a domain controller and steal some data…I’m not looking at the action taken on the network, which may or may not be a bad thing, I’m actually looking to decide, are these credentials being used properly?”</p><p>The security team also needs to work with the human resources department to be aware of potential problem employees who might have exceptional access to corporate data, such as a system administrator like Snowden.</p><p>For instance, Inglis said that Snowden was involved in a workplace incident that might have changed the way he felt about his work at the NSA. Because Snowden was a systems administrator with incredible access to the NSA’s systems, Inglis said, it would have made sense to put a closer watch on him after that incident in 2012, because the consequences if Snowden attacked the NSA’s network were high.</p><p>“You cannot treat HR, information technology, and physical systems as three discrete domains that are not somehow connected,” Inglis said.</p><p>Taking all of these actions, from hiring trusted people to using network monitoring technology and procedures for responding to alerts, can help prevent insider threats. But, as Inglis knows, there is no guarantee.</p><p>“Hindsight is 20/20. You have to look and say, ‘Would I theoretically catch the nuances from this?’”</p>
https://sm.asisonline.org/Pages/Hacked-Again.aspx
Book Review: Hacked Again<p>ScottSchober.com Publishing; ScottSchober.com, 202 pages; $34.95</p><p>If you are seeking useful security advice on how to mitigate or prevent cybersecurity breaches, <em>Hacked Again</em> is a good resource to have in your library. </p><p>Author Scott Schober, a business owner and wireless technology expert, discusses pitfalls that all businesses face and the strategies used to mitigate cyberattacks. He discusses malware, email scams, identity theft, social engineering, passwords, and the Dark Web. </p><p>Another important concept is having systems in place that enable access to information both while a data breach is occurring and afterward. Most companies’ IT departments will have an incident response team; however, the individual user needs to know what to do when breached. Schober offers advice for that. </p><p>The abundance of personal information on social media is another of the author’s concerns. He states that we are twice as likely to be victims of identity theft because of these sites. He also reminds us that no matter how we try to eliminate risk, we’re never completely protected from a cyberattack. </p><p>Many cybersecurity books are more advanced, but Schober’s style is easy to follow, and he explains concepts and theories without confusing the reader. When concepts become overly technical, he incorporates scenarios to explain what these technical terms mean. Students, IT professionals, and novices would benefit from this book. They will learn that everyone must be aware of cybersecurity and stay on top of evolving trends.</p><p>--</p><p><em><strong>Reviewer: Kevin Cassidy</strong> is a professor in the security, fire, and emergency management department at John Jay College of Criminal Justice. He is a member of ASIS.</em><br></p>
https://sm.asisonline.org/Pages/Rise-of-the-IoT-Botnets.aspx
Rise of the IoT Botnets<p>There are many doomsday cyber scenarios that keep security professionals awake at night. Speaking at an event in Washington, D.C., in 2015, Vint Cerf, one of the fathers of the Internet and Google’s vice president and chief Internet evangelist, shared his: waking up to the headline that 1,000 smart refrigerators had been used to conduct a distributed denial of service (DDoS) attack to take down a critical piece of U.S. infrastructure.</p><p>Cerf’s nightmare scenario hasn’t happened, yet. But in 2016 thousands of compromised surveillance cameras and DVRs were used in a DDoS attack against Domain Name System (DNS) provider Dyn to take down major websites on the East Coast of the United States. It was a massive Internet outage and, for many, a true wake-up call.</p><p>At approximately 7:00 a.m. on October 21, Dyn was hit by a DDoS attack, and it quickly became clear that this attack was different from the DDoS attacks the company had seen before. </p><p>It targeted all of Dyn’s 18 data centers throughout the world, disrupting tens of millions of Internet Protocol (IP) addresses and causing outages to millions of brand-name Internet services, including Twitter, Amazon, Spotify, and Netflix.</p><p>Two hours later, Dyn’s Network Operations Center (NOC) team mitigated the attack and restored service to its customers. </p><p>“Unfortunately, during that time, Internet users directed to Dyn servers on the East Coast of the United States were unable to reach some of our customers’ sites, including some of the marquee brands of the Internet,” Dyn Chief Strategy Officer Kyle York wrote in a statement for the company. </p><p>A second attack hit Dyn several hours later. Dyn mitigated it in just over an hour, though some customers experienced extended latency delays during that time. 
A third wave of attacks hit Dyn, but the company successfully mitigated it without affecting customers.</p><p>“Dyn’s operations and security teams initiated our mitigation and customer communications process through our incident management system,” York explained. “We practice and prepare for scenarios like this on a regular basis, and we run constantly evolving playbooks and work with mitigation partners to address scenarios like this.”</p><p>The attacks caused up to an estimated $110 million in lost revenue and sales, according to a letter that U.S. Representative Bennie G. Thompson (D-MS) sent to former U.S. Department of Homeland Security (DHS) Secretary Jeh Johnson.</p><p>“While DDoS attacks are not uncommon, these attacks were unprecedented not only insofar as they appear to have been executed through malware exploiting tens of thousands of Internet of Things (IoT) devices, but also because they were carried out against a firm that provides services that, by all accounts, are essential to the operation of the Internet,” the letter explained.</p><p>These devices were part of the Mirai botnet, which is made up of at least 500,000 IoT devices, including DVRs and surveillance cameras, known to be in China, Hong Kong, Macau, Vietnam, Taiwan, South Korea, Thailand, Indonesia, Brazil, and Spain, among other nations.</p><p>The botnet, which was created in 2016, has been used to conduct high-profile, high-impact DDoS attacks, including the attack on security researcher Brian Krebs’ website, Krebs on Security—one of the largest DDoS attacks known to date. </p><p>“Mirai serves as the basis of an ongoing DDoS-for-hire…service, which allows attackers to launch DDoS attacks against the targets of their choice in exchange for monetary compensation, generally in the form of Bitcoin payments,” according to a threat intelligence report on Mirai from Arbor Networks’ Security Engineering and Response Team (ASERT). 
“While the original Mirai botnet is still in active use as of this writing, multiple threat actors have been observed customizing and improving the attack capabilities of the original botnet code, and additional Mirai-based DDoS botnets have been observed in the wild.”</p><p>This is because shortly after the Dyn attack, Mirai’s source code was published on the Internet, and “everyone and their dog tried to get their hands on it and run it in some form or another,” says Javvad Malik, a security advocate at AlienVault, a cybersecurity management provider.</p><p>Mirai is “out there and the problem is, there isn’t any easy mitigation against it,” Malik explains. “A camera or a webcam, there’s no real, easy way to patch it or update it, or there’s no non-technical way your average user could patch it. And most users aren’t even aware that their device was part of the attack.”</p><p>There are more than 25 billion connected devices in use worldwide now, and that number is expected to increase to 50 billion by 2020 as consumer goods companies, auto manufacturers, healthcare providers, and other businesses invest in IoT devices, according to the U.S. Federal Trade Commission.</p><p>But many of the devices already on the market are not designed with security in mind. Many do not allow consumers to change default passwords on the devices or patch them to prevent vulnerabilities.</p><p>The Mirai botnet and others like it take advantage of these insecurities in IoT devices. Mirai constantly scans devices for vulnerabilities and then introduces malware to compromise them. Once compromised, those devices scan others and the cycle continues. These devices can then be used by an attacker to launch DDoS attacks, like the one on Dyn.</p><p>Some manufacturers have sought to remedy vulnerabilities in their devices by issuing voluntary recalls when they discover that they’ve been used in a botnet attack. 
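</p><p>The weakness Mirai exploits can also be audited from the defender’s side. A hedged sketch, with made-up device models and factory credentials, that checks a configuration database for gear still running its shipped defaults:</p>

```python
# Made-up factory defaults per device model, for illustration only.
FACTORY_DEFAULTS = {
    "acme-dvr-100": ("admin", "admin"),
    "acme-cam-200": ("root", "12345"),
}

def still_on_factory_defaults(fleet):
    """fleet: maps device name -> (model, username, password) as
    recorded in a configuration database. Returns the devices a
    Mirai-style scanner would find wide open."""
    return [name for name, (model, user, password) in fleet.items()
            if FACTORY_DEFAULTS.get(model) == (user, password)]
```

<p>Anything this turns up is a candidate for an immediate password change, or for replacement if the device does not allow one.</p><p>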
But for many other manufacturers, there’s not enough incentive to address the problem, and most consumers are unaware of the issue, says Gary Sockrider, principal security technologist at Arbor Networks.</p><p>“Consumers are largely unaware. Their devices may be compromised and taking part in a botnet, and most consumers are completely oblivious to that,” he explains. “They don’t even know how to go about checking to see if they have a problem, nor do they have a lot of motivation unless it’s affecting their Internet connection.”</p><p>DHS and the U.S. National Institute of Standards and Technology (NIST) both recently released guidance on developing IoT devices and systems with security built in. In fact, NIST accelerated the release of its guidance—Special Publication 800-160—in response to the Dyn attack.</p><p>But some experts say more than guidance is needed. Instead, they say that regulations are needed to require IoT devices to allow default passwords to be changed, to be patchable, and to have support from their manufacturers through a designated end-of-life time period.</p><p>“The market can’t fix this,” said Bruce Schneier, a fellow at the Berkman Klein Center at Harvard University, in a congressional hearing on the Dyn attack. “The buyer and seller don’t care…so I argue that government needs to get involved. That this is a market failure. And what I need are some good regulations.”</p><p>However, regulations may not solve the problem. If the United States, for instance, issues regulations, they would apply only to future devices that are made and sold in the United States. And regulations can have other impacts, Sockrider cautions.</p><p>“It’s difficult to craft legislation that can foresee potential problems or vulnerabilities,” he explains. “If you make it vague enough, it’s hard to enforce compliance. 
And if you make it too specific, then it may not have the desired effect.”</p><p>Regulations can also drive up cost and hinder development if they are not designed to foster innovation. “Compliance does not equal security, necessarily,” Sockrider says. “Part of compliance may mean doing things to secure your products and services and networks, but there could always be vulnerabilities that aren’t covered…. You’ve got to be careful that you’re covering beyond just compliance and getting to true security as much as possible.” </p><p>So, what steps should organizations take in the meantime to reduce the risk of their devices being compromised and used to launch attacks on innocent parties?</p><p>If a company already has IoT devices, such as security cameras or access control card readers, in its facilities, the first step is segmentation, says Morey Haber, vice president of technology for security vendor BeyondTrust. </p><p>“Get them off your main network,” he adds. “Keep them on a completely isolated network and control access to them; that’s the best recourse.”</p><p>If the organization can’t do that and it’s in a highly regulated environment, such as a financial firm subject to PCI compliance, it should replace the devices and reinstall them on a segmented network, Haber says.</p><p>Organizations should also change all default user accounts and passwords for IoT devices, Sockrider says. “Disable them if possible. If you can’t, then change them. If you can’t change them, then block them.”</p><p>For organizations that are looking to install IoT devices, Haber says they should plan to install them on a segmented network and ask integrators about the security of the devices. </p><p>Sample questions include: Do they maintain a service level agreement for critical vulnerabilities? What is the lifespan of the device? How often will patches be released? </p><p>“And the last thing that becomes even more critical: What is the procedure for updating?” Haber says. 
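</p><p>Haber’s segmentation advice implies a recurring inventory check: is every IoT device actually on the isolated network? A minimal sketch, in which the segment and the device addresses are hypothetical:</p>

```python
import ipaddress

# Hypothetical isolated VLAN reserved for IoT gear.
IOT_SEGMENT = ipaddress.ip_network("10.20.0.0/24")

def misplaced_devices(inventory):
    """inventory: maps device name -> IP address string. Returns devices
    whose address falls outside the isolated segment, i.e., gear still
    sitting on the main corporate network."""
    return [name for name, addr in inventory.items()
            if ipaddress.ip_address(addr) not in IOT_SEGMENT]
```

<p>Run against the asset inventory on a schedule, a check like this catches cameras and card readers that were quietly plugged into the wrong network.</p><p>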
“Because if you have to physically go to each one and stick an SD card in with a binary to do the upload, that’s unfeasible if you’re buying thousands of cameras to distribute to your retail stores worldwide. There’s no way of doing that.”</p><p>Organizations should also look at their policies on employees bringing their own devices into the workplace and connecting them to the network. </p><p>For instance, employers should be wary of an employee who brings in a new toaster and connects it to the company Wi-Fi without anyone else’s knowledge. “That type of Shadow IT using IoT devices is where the high risk comes from,” Haber explains. </p><p>Organizations should also see what they can do to keep malicious traffic inside their network from getting out. </p><p>“Think about it in the reverse; normally we’re trying to keep bad stuff out of our network, but in this case, we want to keep the bad stuff from leaving our network,” Sockrider says. “Because in this case, if an IoT device on your network is compromised, it’s not necessarily trying to attack you, it’s trying to attack someone else and you can be a good citizen by blocking that outbound traffic and preventing it from doing so.”</p><p>While companies can take steps to reduce the likelihood that their devices will be compromised by a botnet and used to attack others, attacks—like the Dyn attack—are likely to continue, Malik says.</p><p>“We’ll probably only see more creative ways of these attacks going forward,” he explains. “At the moment, it’s primarily the webcams and DVRs, but you’re probably going to see different attacks that are more tailored towards specific devices and maybe even a change of tactics. Instead of going after Dyn…taking down a smaller competitor.”</p><p>Malik also says he anticipates that cyber criminals will conduct these more creative attacks by purchasing DDoS as a service, a growing industry over the past few years. 
</p><p>“Some providers are just as good as, if not better than, professional legitimate services,” Malik says. “It’s very easy; they offer support. You just go there, you click buy, send the Bitcoins, enter your target, and job done. You don’t even need any technical expertise to do this. It’s very, very convenient.”</p>