Social Engineering
The Internet And The Future of Online Trust
By Megan Gates, 2017-08-11

<p>How will online trust change over the next decade? That was the focus of a new <a href="http://www.pewinternet.org/2017/08/10/the-fate-of-online-trust-in-the-next-decade/#vinton-cerf" target="_blank">nonscientific canvassing of 1,233 individuals</a> by the Pew Research Center and Elon University’s Imagining the Internet Center, which found that most experts think “lack of trust” won’t be a barrier to society’s reliance on the Internet.</p><p>The survey partners asked the respondents, including technologists, scholars, practitioners, strategic thinkers, and other leaders: “Will people’s trust in their online interactions, their work, shopping, social connections, pursuit of knowledge, and other activities be strengthened or diminished over the next 10 years?”</p><p>Forty-eight percent of respondents said they think online trust will be strengthened, 28 percent reported that trust will remain the same, and just 24 percent said trust will be diminished.</p><p>“Many of these respondents made references to changes now being implemented or being considered to enhance the online trust environment,” according to Pew. 
“They mentioned the spread of encryption, better online identity-verification systems, tighter security standards in Internet protocols, new laws and regulations, new techno-social systems like crowdsourcing and up-voting/down-voting or challenging online content.”</p><p>For instance, Adrian Hope-Bailie, standards officer at blockchain solution provider Ripple, participated in the survey and said technology advancements are bringing together disparate but related fields, like finance, health care, education, and politics.</p><p>“It’s only a matter of time before some standards emerge that bind the ideas of identity and personal information with these verticals such that it becomes possible to share and exchange key information, as required, and with consent to facilitate much stronger trusted relationships between users and their service providers,” Hope-Bailie explained.</p><p>One technology that respondents were asked about in particular was blockchain and the role it might play in fostering trust on the Internet. Blockchain is an encryption-protected digital ledger used to record validated transactions and interactions that cannot be edited after the fact.</p><p>Other experts, however, were less optimistic about the future of trust in online interactions. Vinton Cerf, vice president and chief Internet evangelist at Google, and co-inventor of the Internet Protocol, participated in the survey and said that trust is “leaking” out of the Internet.</p><p>“Unless we strengthen the ability of content and service suppliers to protect users and their information, trust will continue to erode,” he explained. 
“Strong authentication to counter hijacking of accounts is vital.”</p><p>Overall, the survey found six major themes on the future of trust in online interactions:</p><div><ol><li><p>Trust will strengthen because systems will improve and people will adapt to them and more broadly embrace them.</p></li><li><p>The nature of trust will become more fluid as technology embeds itself into human and organizational relationships.</p></li><li><p>Trust will not grow, but technology usage will continue to rise, as a “new normal” sets in.</p></li><li><p>Some say blockchain could help; some expect its value might be limited.</p></li><li><p>The less-than-satisfying current situation will not change much in the next decade.</p></li><li><p>Trust will diminish because the Internet is not secure, and powerful forces threaten individuals’ rights.</p></li></ol></div>
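The tamper-evidence property attributed to blockchain above comes from each record committing to the cryptographic hash of its predecessor, so editing any past entry invalidates everything after it. The following is a minimal toy sketch of that hash-chain idea, not any production blockchain; it omits consensus, signatures, and networking entirely.

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Build a block whose hash commits to its contents and its predecessor."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

def verify_chain(chain):
    """Valid only if every block still hashes to its stored value and
    points at the hash of the block before it."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Build a three-block ledger, then show that editing history invalidates it.
chain = [make_block("genesis", "0")]
chain.append(make_block("alice pays bob", chain[-1]["hash"]))
chain.append(make_block("bob pays carol", chain[-1]["hash"]))
assert verify_chain(chain)

chain[1]["data"] = "alice pays mallory"   # tamper with a past transaction
assert not verify_chain(chain)
```

This is why survey respondents saw the technology as a possible trust aid: participants can detect retroactive edits without trusting the record keeper.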

The Zero Day Problem

<p>In August 2017, FireEye released new threat research confirming with “moderate confidence” that the Russian hacking group APT28, also known as Fancy Bear, was using an exploit to install malware on hotel networks that then spread laterally to target travelers.</p><p>“Once inside the network of a hospitality company, APT28 sought out machines that controlled both guest and internal Wi-Fi networks,” FireEye said in a blog post. “No guest credentials were observed being stolen at the compromised hotels; however, in a separate incident that occurred in fall 2016, APT28 gained initial access to a victim’s network via credentials likely stolen from a hotel Wi-Fi network.”</p><p>After APT28 accessed corporate and guest machines connected to the hotel Wi-Fi networks, it deployed malware that sent the victims’ usernames and hashed passwords to APT28-controlled machines.</p><p>“APT28 used this technique to steal usernames and hashed passwords that allowed escalation of privileges in the victim network,” FireEye explained.</p><p>This new method is worrisome for security experts because the exploit APT28 was using to infiltrate hotel networks in the first place was EternalBlue, the same exploit used to spread ransomware such as WannaCry and NotPetya. It was also allegedly stolen from the U.S. National Security Agency (NSA).</p><p>A group of hackers, dubbed the Shadow Brokers, posted the EternalBlue exploit online in April 2017 after claiming to have stolen it from the NSA. The leak was just one of many the group has made over the past year detailing NSA exploits targeting Cisco Systems products, Microsoft products, and others. 
</p><p>The leaks prompted renewed debate on whether the NSA should change its vulnerabilities equities process (VEP) to disclose cyber vulnerabilities to the private sector more frequently to prevent future cyberattacks.</p><p>Some of the harshest criticism came from Microsoft itself. In a blog post, President and Chief Legal Officer Brad Smith wrote that the WannaCry attack provided an example of why “stockpiling of vulnerabilities by governments” is a problem.</p><p>“An equivalent scenario with conventional weapons would be the U.S. military having some of its Tomahawk missiles stolen,” Smith explained. “And this most recent attack represents a completely unintended but disconcerting link between the two most serious forms of cybersecurity threats in the world—nation-state action and organized criminal action.”</p><p>The VEP began to take form under the George W. Bush administration, when then-President Bush issued a directive instructing the director of national intelligence, the attorney general, and the secretaries of state, defense, and homeland security to create a “joint plan for the coordination and application of offensive capabilities to defend U.S. information systems.”</p><p>Based on this directive, the respective agencies recommended that the government create a VEP to coordinate the government’s “offensive and defensive mission interests,” according to a memo by the Congressional Research Service (CRS) in February 2017.</p><p>The Obama administration then created the current VEP, which became publicly known in 2014 in response to the Heartbleed vulnerability—a bug in the OpenSSL cryptographic software that allowed protected information to be compromised.</p><p>The VEP, as it is known to exist today, provides the process for how the U.S. 
government chooses whether to disclose vulnerabilities to the vendor community or retain those vulnerabilities for its own use.</p><p>“Vulnerabilities for this purpose may include software vulnerabilities (such as a flaw in the software which allows unauthorized code to run on a machine) or hardware vulnerabilities (such as a flaw in the design of a circuit board which allows an unauthorized party to determine the process running on the machine),” according to the CRS memo sent to U.S. Representative Ted Lieu (D-CA).</p><p>To be eligible for the VEP, however, a vulnerability must be new or not known to others. Vulnerabilities are referenced against the Common Vulnerabilities and Exposures (CVE) database to determine if they are new or unknown.</p><p>When choosing to disclose a vulnerability, there are no clear rules, but the U.S. government considers several factors, according to a blog post by former White House Cybersecurity Coordinator Michael Daniel, written in response to allegations that the NSA knew about the Heartbleed vulnerability prior to its disclosure online.</p><p>For instance, the government considers the extent of the vulnerable system’s use in the Internet’s infrastructure, the risks and harm that could be done if the vulnerability is not patched, whether the administration would know if another organization were exploiting the vulnerability, and whether the vulnerability is needed for the collection of intelligence.</p><p>The government also considers how likely it is that the vulnerability will be discovered by others, whether the government can use the vulnerability before disclosing it, and whether the vulnerability is, in fact, patchable, according to Daniel.</p><p>In the post, Daniel wrote that the government should not “completely forgo” its practice of collecting zero-day vulnerabilities because it provides a way to “better protect our country in the long run.”</p><p>And while the process allows the government to retain vulnerabilities for its own 
use, it has tended to disclose them instead. NSA Director Admiral Michael Rogers, for instance, testified to the U.S. Senate Armed Services Committee in September 2016 that the NSA has a VEP disclosure rate of 93 percent, though the CRS memo noted a discrepancy in that rate.</p><p>“The NSA offers that 91 percent of the vulnerabilities it discovers are reported to vendors for vulnerabilities in products made or used in the United States,” the memo said. “The remaining 9 percent are not disclosed because either the vendor patches it before the review process can be completed or the government chose to retain the vulnerability to exploit for national security purposes.”</p><p>Jonathan Couch, senior vice president of strategy at ThreatQuotient, says that the U.S. government should not be expected to disclose all of the vulnerabilities it leverages in its offensive cyber espionage operations.</p><p>“Our government, just like other governments out there, is reaching out and touching people when needed; they leverage tools and capabilities to do that,” says Couch, who served in the U.S. Air Force at the NSA before moving to the private sector. “You don’t want to invest a ton of money into developing capabilities, just to end up publishing a patch and patching against it.”</p><p>However, Couch adds that more could be done by agencies—such as the U.S. Department of Homeland Security (DHS)—that work with the private sector to push out critical patches on vulnerabilities when needed.</p><p>“Right now, I think they are too noisy; DHS will pass along anything that it finds—it doesn’t help you prioritize at all,” Couch says. 
“If DHS could get a pattern of ‘Here’s what we need to patch against, based on what we know and are allowed to share,’ then push that out and allow organizations to act on that.”</p><p>Other critics have also recommended that the government be more transparent about the VEP by creating clear guidelines for disclosing vulnerabilities, and that it “default toward disclosure with retention being the rare exception,” the CRS explained.</p><p>One of those recommendations was published by the Harvard Kennedy School’s Belfer Center for Science and International Affairs in <em>Government’s Role in Vulnerability Disclosure: Creating a Permanent and Accountable Vulnerability Equities Process</em>.</p><p>The paper, written by Ari Schwartz, managing director of cybersecurity services for Venable LLP and former member of the White House National Security Council, and Rob Knake, Whitney Shepardson senior fellow at the Council on Foreign Relations and former director for cybersecurity policy at the National Security Council, recommended the VEP be strengthened through formalization.</p><p>“By affirming existing policy in higher-level, unclassified governing principles, the government would add clarity to the process and help set a model for the world,” the authors explained. “If all the countries with capabilities to collect vulnerabilities had a policy of leaning toward disclosure, it would be valuable to the protection of critical infrastructure and consumers alike, as well as U.S. corporate interests.”</p><p>However, the authors cautioned that affirming this process does not mean that the government should publicize its disclosure decisions or deliberations.</p><p>“In many cases, it likely would not serve the interests of national security to make such information public,” according to Schwartz and Knake. “However, the principles guiding these decisions, as well as a high-level map of the process that will be used to make such decisions, can and should be public.”</p><p>U.S. 
lawmakers also agree that the VEP should be overhauled to boost transparency. In May, U.S. Senators Brian Schatz (D-HI), Ron Johnson (R-WI), and Cory Gardner (R-CO), and U.S. Representatives Ted Lieu (D-CA) and Blake Farenthold (R-TX) introduced legislation that would require a Vulnerabilities Equities Review Board comprising permanent members. These members would include the secretary of homeland security, the FBI director, the director of national intelligence, the CIA director, the NSA director, and the secretary of commerce.</p><p>Schatz said that the bill, called the Protecting Our Ability to Counter Hacking (PATCH) Act, strikes the correct balance between national security and cybersecurity.</p><p>“Codifying a framework for the relevant agencies to review and disclose vulnerabilities will improve cybersecurity and transparency to the benefit of the public while also ensuring that the federal government has the tools it needs to protect national security,” he explained in a statement.</p><p>Additionally, the secretaries of state, treasury, and energy would be considered ad hoc members of the board. Any member of the National Security Council could also be requested by the board to participate, if approved by the president, according to the legislation.</p><p>The bill has not moved forward in Congress since its introduction, which suggests that many do not see a need for an overhaul of the current disclosure system.</p><p>“It’s just not realistic for NSA, CIA, or the military or other international governments to start disclosing these tools they’ve developed for cyber espionage,” Couch says.</p>
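The VEP's eligibility rule described above, that a vulnerability enters the process only if it is new or not known to others, amounts to a gate against public records. The sketch below is a toy illustration of that gate: the `eligible_for_vep` helper and the in-memory entries are hypothetical, and a real check would consult the actual CVE/NVD feeds rather than a hard-coded set.

```python
import re

# Toy stand-in for the public CVE catalogue; these entries are
# illustrative only, not real database records.
KNOWN_PUBLIC_FLAWS = {
    ("windows_smb", "remote code execution"),
    ("openssl", "memory disclosure"),
}

# CVE identifiers follow the pattern CVE-YYYY-NNNN (4+ digits).
CVE_ID = re.compile(r"^CVE-\d{4}-\d{4,}$")

def eligible_for_vep(component, flaw_class, cve_id=None):
    """A vulnerability is eligible for equities review only if it is new:
    it carries no known CVE ID and is not already publicly catalogued."""
    if cve_id is not None and CVE_ID.match(cve_id):
        return False  # already publicly identified, nothing to deliberate
    return (component, flaw_class) not in KNOWN_PUBLIC_FLAWS

# A Heartbleed-like flaw is already public, so it never reaches the board;
# a genuinely novel flaw does.
assert not eligible_for_vep("openssl", "memory disclosure")
assert eligible_for_vep("router_firmware", "auth bypass")
```

The interesting policy questions in the article, whether to disclose or retain, begin only after a flaw passes this novelty gate.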
Book Review: Insider Threats

<p>Cornell University Press; cornellpress.cornell.edu; 216 pages; $89.95.</p><p>A collection of essays and case studies that originated in two workshops sponsored by the Global Nuclear Future Project of the American Academy of Arts and Sciences in 2011 and 2014, <em>Insider Threats</em> focuses on protecting the nuclear industry—but its lessons apply across many sectors.</p><p>The case studies are fascinating. A chapter devoted to the Fort Hood terrorist attack shows how changes in mission and procedures allowed information about the perpetrator to slip through the cracks. Instead of capturing warning signals, the systems scattered them.</p><p>Similar lessons were learned from the post–9/11 anthrax attacks in the United States. The author says that the suspect gained access to anthrax through “a complicated mix of evolving regulations, organizational culture, red flags ignored, and happenstance.”</p><p>A real strength of this book is its root-cause analysis approach. Blame is rarely laid at the feet of incompetent people, but assigned to other factors like the unintended consequences of organizational design and known psychological tendencies.</p><p>The last chapter brings together all the lessons learned and cites 10 worst practices. For example, number seven is: “forget that insiders may know about security measures and how to work around them.” This chapter will be the most valuable to security practitioners because it offers a roadmap toward building an insider threat mitigation plan.</p><p><em>Insider Threats</em> is well-written, even literary. Its chief lesson: organizations are rarely designed to catch the insider, and much work needs to be done to protect them.</p><p><strong><em>Reviewer: Ross Johnson, CPP</em></strong><em>, is the senior manager of security and contingency planning for Capital Power, and infrastructure advisor for Awz Ventures. 
He previously worked as the security supervisor for an offshore oil drilling company in the Gulf of Mexico and overseas. Johnson is the author of Antiterrorism and Threat Response: Planning and Implementation.</em></p>
How to Protect PII

<p><span style="line-height:1.5em;">If you are an employee, a student, a patient, or a client, your personally identifiable information (PII) is out there—and prime for hacking. In October, the U.S. Government Accountability Office (GAO) added protecting the privacy of PII to its list of high-risk issues affecting organizations across the country. All organizations, from large federal agencies to universities, hospitals, and small businesses, store PII about their employees, clients, members, or contractors.</span><span style="line-height:1.5em;"> And, as seen in recent large-scale cyberattacks, PII is a hot commodity for malicious attackers.</span></p><p>According to the U.S. Office of Management and Budget, PII is any information that can be used alone or with other sources to uniquely identify, contact, or locate an individual. However, the definition of PII can depend on the context in which the information is used, according to Angel Hueca, information systems security officer with IT consulting company VariQ. For example, a name by itself is innocuous, but that name combined with a personal e-mail address, a Social Security number, or an online screenname or alias could give bad actors all they need to wreak havoc on a person or company.</p><p>And it appears that no one is immune to the risk of compromised PII. According to research by the GAO, 87 percent of Americans can be uniquely identified using only three common types of information: gender, date of birth, and ZIP code.</p><p>If PII is leaked, the consequences for both affected individuals and organizations can be damaging, says Hueca. Companies may face large fines or legal action if the PII they hold is breached, especially if the organization didn’t comply with outlined customer agreements or federal regulations, or if the breach violates the Health Insurance Portability and Accountability Act. 
A breach can also be reputation-damaging and cost the company employees and clients, Hueca notes. </p><p>Hueca stresses the importance of educating all employees, regardless of whether they have access to the company’s PII, about cybersecurity awareness and online behavior. Even using a personal e-mail at work or posting an image of their workspace on their social media account could lead to the leak of PII—there may be confidential information inadvertently documented in the photo, Hueca points out.</p><p>A more common occurrence is someone with access to an organization’s PII database inadvertently forwarding an e-mail with sensitive information, such as a client’s case number or an employee’s personal contact information. For example, in 2014, a Goldman Sachs contractor accidentally sent an e-mail with confidential brokerage account information to a Google e-mail address instead of to the contractor’s personal e-mail. Goldman Sachs went to the New York State Supreme Court to ask Google to block the recipient from accessing the e-mail to prevent a “needless and massive” data breach. The court didn’t rule on the case, because Google voluntarily blocked access to the e-mail.</p><p>Hueca says that segregating duties and tightly controlling who has access to certain information can help with this issue. Often, HR or administrative employees may need access to some PII, but not all of it—isolating potentially sensitive information can prevent harmful leaks. </p><p>How an organization’s network is set up can help prevent the accidental or malicious transfer of PII. Hueca suggests keeping sensitive information segregated from the rest of the network environment—if there is a breach, hackers will have to break through a second firewall to access the information. 
Organizations should also take advantage of standard content-tracking software to spot suspicious activity.</p><p>“Fortunately, many organizations have something called content filtering, which are tools that are able to filter information as it comes in and out of the organization,” Hueca explains. “If there’s something that looks like a Social Security number, with nine digits, being sent out, the tool will alert an administrator that this activity is happening, which could be accidental or malicious.”</p><p>The U.S. Department of Homeland Security’s (DHS) handbook for safeguarding PII says only secure, organization-issued electronic devices should be used to view sensitive information. If an employee must access PII on a portable device, such as a laptop, USB drive, or external hard drive, the data should be encrypted. And if PII must be e-mailed within the office, DHS strongly recommends password-protecting the data within the e-mail.</p><p>Lastly, Hueca recommends that all companies have an incident response plan in place specifically for the malicious theft of PII.</p><p>“This is something that most organizations don’t think about, having an incident response plan specifically for a PII breach,” Hueca says. “What happens if you do get breached? What are the steps? Talk about what-ifs. Once you have a notification in place, you get alerted, what do you do? Try to segregate it from other sensitive data and figure out what happened.”</p>
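The content-filtering behavior Hueca describes, flagging outbound text that looks like a nine-digit Social Security number, can be sketched with a simple pattern scan. This is a toy illustration of the idea, not any vendor's DLP engine; the pattern and the `scan_outbound` helper are assumptions for the example.

```python
import re

# SSNs are nine digits, conventionally written 3-2-4; match both the
# dashed form and a bare nine-digit run.
SSN_PATTERN = re.compile(r"\b(?:\d{3}-\d{2}-\d{4}|\d{9})\b")

def scan_outbound(message):
    """Return the SSN-like strings found in an outbound message so an
    administrator can be alerted before it leaves the network."""
    return SSN_PATTERN.findall(message)

assert scan_outbound("Case 42 update: client SSN 123-45-6789 attached") == ["123-45-6789"]
assert scan_outbound("Invoice total is $1,200 for order 58") == []
```

A pattern this naive will also flag innocent nine-digit runs (order numbers, phone fragments), which is why real content filters layer validation and context checks on top of the raw match before alerting.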