Social Engineering Review: Social Media Risk and Governance<p>Kogan Page; 232 pages; $37.95.</p><p>Phil Mennie is an international expert on social media, risk management, and information technology governance. His latest publication, <em>Social Media Risk and Governance</em>, is a must-read for the intermediate to advanced risk management and security practitioner. It is a captivating book that makes the case for identifying social media and information technology risks in an organization, and it outlines ways to address each of those risks promptly and to the organization’s benefit.</p><p>Governing the safety of social media inside and outside of the workplace is a challenging task. Mennie articulates a clear and concise social media strategy that can be adopted by risk management professionals both domestically and internationally, with specific protocols and tools. He uses examples from real-world companies—like MasterCard—to support his points. Diagrams, matrices, case studies, images, graphs, flowcharts, procedure assessment methods, and other forms of multimedia further support the text. </p><p>One shortcoming in the book is its lack of information on cloud computing. Many organizations are migrating to cloud-based storage options, such as OneDrive, Dropbox, and Google Drive. Research indicates that organizations should be very cautious about storing sensitive data in the cloud. The author reflects on the importance of data privacy, but does not expand on specific steps for uploading and transferring data to the cloud safely. </p><p>Also in the text, the author notes that certain legislation is being considered by several states and jurisdictions. 
However, the description is vague and does not cite specific pieces of legislation for reference.</p><p>The book urges technology professionals, compliance regulators, and risk management leaders to ask difficult questions: Is our organization embracing the power of social media? Are we keeping both internal and external stakeholders safe? What governance protocols do we have in place? How are we measuring the success of our protocols?</p><p>In sum, this book will benefit security professionals, social media experts, search engine optimization professionals, and risk managers. It is a true asset to the security management and information technology sector.</p><p>--</p><p><em>Reviewer: Thomas Rzemyk, Ed.D., is a professor of criminal justice at Columbia Southern University and the director of technology and a cybersecurity instructor at Mount Michael Benedictine School. He is a criminology discipline reviewer in the Fulbright Scholar Program, and he is a member of ASIS.</em></p>


Virtual Lineup<p>U.S. state and federal agencies are amassing databases of American citizens’ fingerprints and images. The programs were largely under the public radar until a governmental watchdog organization conducted an audit of them. The so-called “virtual lineups” include two FBI programs that use facial recognition technology to search a database containing 64 million images and fingerprints.</p><p>In May 2016, the U.S. Government Accountability Office (GAO) released <em>Face Recognition Technology: FBI Should Better Ensure Privacy and Accuracy</em>, a report on the FBI programs. Since 1999, the FBI has been using the Integrated Automated Fingerprint Identification System (IAFIS), which digitized the fingerprints of arrestees. In 2010, a $1.2 billion project began to replace IAFIS with Next Generation Identification (NGI), a program that includes both fingerprint data and facial recognition technology using the Interstate Photo System (IPS). The FBI began a pilot version of the NGI-IPS program in 2011, and it became fully operational in April 2015. </p><p>The NGI-IPS draws most of its photos from some 18,000 federal, state, and local law enforcement entities, and consists of two categories: criminal and civil identities. More than 80 percent of the photos are criminal—obtained during an arrest—while the rest are civil and include photos from driver’s licenses, security clearances, and other photo-based civil applications. The FBI, which is the only agency able to directly access the NGI-IPS, can use facial recognition technology to support active criminal investigations by searching the database and finding potential matches to the image of a suspected criminal. 
</p><p>Diana Maurer, the director of justice and law enforcement issues on the homeland security and justice team at GAO, explains to Security Management that the FBI can conduct a search for an active investigation based on images from a variety of sources—camera footage of a bank robber, for example. Officials input the image to the NGI-IPS, and the facial recognition software will return as many as 50 possible matches. The results are investigative leads, the report notes, and cannot be used to charge an individual with a crime. A year ago, the FBI began to allow seven states—Arkansas, Florida, Maine, Maryland, Michigan, New Mexico, and Texas—to submit photos to be run through the NGI-IPS. The FBI is working with eight additional states to grant them access, and another 24 states have expressed interest in using the database.</p><p>“The fingerprints and images are all one package of information,” Maurer says. “If you’ve been arrested, you can assume that you’re in, at a minimum, the fingerprint database. You may or may not be in the facial recognition database, because different states have different levels of cooperation with the FBI on the facial images.”</p><p>The FBI has a second, internal investigative tool called Facial Analysis, Comparison, and Evaluation (FACE) Services. The more extensive program runs similar automated searches using NGI-IPS as well as external partners’ face recognition systems that contain primarily civil photos from state and federal government databases, such as driver’s license photos and visa applicant photos. </p><p>“The total number of face photos available in all searchable repositories is over 411 million, and the FBI is interested in adding additional federal and state face recognition systems to their search capabilities,” the GAO report notes.</p><p>Maurer, who authored the GAO report, says researchers found a number of privacy, transparency, and accuracy concerns over the two programs. 
Under federal privacy laws, agencies must publish a System of Records Notice (SORN) or a Privacy Impact Assessment (PIA) in the Federal Register identifying the categories of individuals whose information is being collected. Maurer notes that the information on such regulations is “typically very wonky and very detailed” and is “not something the general public is likely aware of, but it’s certainly something that people who are active in the privacy and transparency worlds are aware of.” </p><p>GAO found that the FBI did not issue timely or accurate SORNs or PIAs for its two facial recognition programs. In 2008, the FBI published a PIA of its plans for NGI-IPS but didn’t update the assessment after the program underwent significant changes during the pilot phase—including the addition of facial recognition services. Additionally, the FBI did not release a PIA for FACE Services until May 2015—three years after the program began. </p><p>“We were very concerned that the Department of Justice didn’t issue the required SORN or PIA until after FBI started using the facial recognition technology for real-world work,” Maurer notes. </p><p>Maurer says the U.S. Department of Justice (DOJ)—which oversees the FBI—disagreed with the GAO’s concerns over the notifications. Officials say the programs didn’t need PIAs until they became fully operational, but the GAO report noted that the FBI conducted more than 20,000 investigative searches during the three-year pilot phase of the NGI-IPS program. </p><p>“The DOJ felt the earlier version of the PIA was sufficient, but we said it didn’t mention facial recognition technology at all,” Maurer notes. </p><p>Similarly, the DOJ did not publish a SORN that addressed the collection of citizens’ photos for facial recognition capabilities until GAO completed its review. 
Even though the facial recognition component of NGI-IPS has been in use since 2011, the DOJ said the existing version of the SORN—the 1999 version that addressed only legacy fingerprint collection activities—was sufficient. </p><p>“Throughout this period, the agency collected and maintained personal information for these capabilities without the required explanation of what information it is collecting or how it is used,” the GAO report states.</p><p>It wasn’t until May 2016—after the DOJ received the GAO draft report—that an updated SORN was published, Maurer notes. “So they did it very late in the game, and the bottom line for both programs is the same: they did not issue the SORNs until after both of those systems were being used for real world investigations,” Maurer explains. </p><p>In the United States, there are no federally mandated repercussions for skirting privacy laws, Maurer says. “The penalty that they will continue to pay is public transparency and scrutiny. The public has very legitimate questions about DOJ and FBI’s commitment to protecting the privacy of people in their use of facial recognition technology.”</p><p>Another concern the GAO identified is the lack of oversight or audits for using facial recognition services in active investigations. The FBI has not completed an audit on the effectiveness of the NGI-IPS because it says the program has not been fully operational long enough. As with the PIA and SORN disagreements, the FBI says the NGI-IPS has only been fully operational since it completed pilot testing in April 2015, while the GAO notes that parts of the system have been used in investigations since the pilot program began in 2011. </p><p>The FBI faces a different problem when it comes to auditing its FACE Services databases. 
Since FACE Services uses up to 18 different databases, the FBI does not have the primary authority or obligation to audit the external databases—the responsibility lies with the owners of the databases, DOJ officials stated. “We understand the FBI may not have authority to audit the maintenance or operation of databases owned and managed by other agencies,” the report notes. “However, the FBI does have a responsibility to oversee the use of the information by its employees.” </p><p>Audits and operational testing on the face recognition technology are all the more important because the FBI has conducted limited assessments on the accuracy of the searches, Maurer notes. The FBI requires the NGI-IPS to return a correct match of an existing person at least 85 percent of the time, a threshold that was met during initial testing. However, Maurer points out that this detection rate was based on a list of 50 photos returned by the system, when sometimes investigators may request fewer results. Additionally, the FBI’s testing database contained 926,000 photos, while NGI-IPS contains about 30 million photos.</p><p>“Although the FBI has tested the detection rate for a candidate list of 50 photos, NGI-IPS users are able to request smaller candidate lists—specifically between two and 50 photos,” the report states. “FBI officials stated that they do not know, and have not tested, the detection rate for other candidate list sizes.” </p><p>Maurer notes that the GAO recommendation to conduct more extensive operational tests for accuracy in real-world situations was the only recommendation the FBI agreed with fully. “It’s a start,” she says. </p><p>The FBI also has not tested the false positive rate—how often NGI-IPS searches erroneously match a person to the database. Because the results are not intended to serve as positive identifications, just investigative leads, the false positive rates are not relevant, FBI officials stated.</p><p>“There was one thing they seemed to miss,” Maurer says. 
“The FBI kept saying, ‘if it’s a false positive, what’s the harm? We’re just investigating someone, they’re cleared right away.’ From our perspective, the FBI shows up at your home or place of business, thinks you’re a terrorist or a bank robber, that could have a really significant impact on people’s lives, and that’s why it’s important to make sure this is accurate.”</p><p>The GAO report notes that the collection of Americans’ biometric information combined with facial recognition technology will continue to grow both at the federal investigative level as well as in state and local police departments.</p><p>“Even though we definitely had some concerns about the accuracy of these systems and the protections they have in place to ensure the privacy of the individuals who are included in these searches, we do recognize that this is an important tool for law enforcement in helping solve cases,” Maurer says. “We just want to make sure it’s done in a way that protects people’s privacy, and that these searches are done accurately.”</p><p>This type of technology isn’t just limited to law enforcement, according to Bloomberg’s Hello World video series. A new Russian app, FindFace, by NTechLab allows its users to photograph anyone they come across and learn their identity. Like the FBI databases, the app uses facial recognition technology to search a popular Russian social network and other public sources with a 70 percent accuracy rate—the creators of the app boast a database with 1 billion photographs. Moscow officials are currently working with FindFace to integrate the city’s 150,000 surveillance cameras into the existing database to help solve criminal investigations. But privacy advocates are raising concerns about other ways the technology could be used. For example, a user could learn the identity of a stranger on the street and later contact that person. 
And retailers and advertisers have already expressed interest in using FindFace to target shoppers with ads or sales based on their interests. </p>Answers for Everyone<p>It’s ubiquitous. From targeted ads on Facebook to customer loyalty cards to Gmail cookies, companies are hungry for information about you. Business intelligence—the gathering and analyzing of information for purposes of commerce—is rapidly advancing on several fronts, not least in security. The amount of information available to organizations and employees is ever increasing. Big Data keeps getting bigger, analytical methods grow more and more sophisticated, and the number of tools available to extract meaning from that information multiplies. </p><p>The most prevalent trend in business intelligence, some experts say, is the democratization of data crunching. The use of sophisticated analytical tools is no longer the exclusive province of one or two specialized analysts in the organization. Instead, these tools are being made available to employees on the front lines, whether they be members of a sales team or security officers working at a remote location. 
Mobile applications and cloud computing are making access to these tools easier.</p><p>Here, Security Management takes a look at a few examples of cutting-edge business intelligence practices and how they apply to security, such as a solution derived from creative analysis of social media data and the mobile use of integrated analytics for crisis management situations. We then look at the big picture and survey some broader trends in business intelligence and their relationship to security, and take a peek at a few challenges the future may hold. </p><p>Social Media </p><p>Social media monitoring is becoming a popular practice in the business community. Whether it be for a reputation management program or for obtaining feedback on a particular service or product, more organizations are monitoring channels like Twitter and Facebook. </p><p>For example, international security expert Hart Brown has developed a business intelligence tool that goes beyond monitoring. Brown, who sits on the ASIS International Crisis Management and Business Continuity Council, is an intelligence veteran who has worked at both the U.S. Department of Defense and the U.S. State Department. A few years ago, Brown was international security manager for a company that was highly active in various regions of Mexico. Given its engagement, the firm needed timely news coverage of all the regional markets in Mexico that it was involved in. </p><p>But this proved hard to come by. In regions outside of major cities, there was often sparse coverage; CNN-type breaking news reports did not exist. And sometimes, even when sufficient media were present, news agencies were pressured by criminal cartels not to report certain developments. “In that country, news is very complicated, and in many cases censored,” Brown says. “We just could not get reliable information about what was going on.” Twitter, however, had the reach that traditional media did not. 
</p><div><p>As Brown describes it, he was in need of a system that would accomplish two main objectives. First, he needed an early warning system to alert him when the stability of a particular town or region was being threatened. Be it a fire, earthquake, gunfight, kidnapping, or some other event, he wanted to know as soon as the incident started happening.</p><p>Second, he wanted to be able to gauge the event’s severity—specifically, how disruptive it would be, and whether its impact was increasing or diminishing over time. This included an ability to assess how much stability had returned the day after an event, which would help the company decide if it had to alter its operations on the ground. A straightforward social media monitoring system would not be sufficient to achieve these two objectives, according to Brown: “It certainly wasn’t enough for me. We had to put some analytics to it,” he says. </p><p>So Brown built a solution through the use of Netvibes, a program popular in the advertising and marketing fields for social media and news tracking. First, he had to ensure that he knew the language spoken by the local community. Whatever the event, he learned the various phrasings used to describe it, including colloquialisms that locals might use on Twitter. He did this by combing through volumes of traditional media reports and identifying keywords to use in the algorithm. </p><p>He then established baselines for the keywords, which represented how many times they would occur in normal everyday Twitter discourse. Brown could then measure the rate of change when an event occurred and usage of the keyword shot up. For example, on a normal day without incident, the Spanish word for gunfight may occur 10 times—in innocent contexts, such as in a movie description. 
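</p><p>The baseline-and-rate-of-change check Brown describes can be sketched in a few lines of Python. This is a minimal illustration of the idea, not his actual Netvibes configuration; the keywords, baseline counts, and alert threshold below are hypothetical examples.</p>

```python
# Minimal sketch of keyword spike detection against a daily baseline.
# Keywords, baselines, and the threshold are hypothetical examples;
# "balacera" is a Spanish term for a gunfight.
BASELINES = {"balacera": 10, "incendio": 25}  # normal daily mention counts
ALERT_THRESHOLD = 5.0  # alert when usage exceeds 5x the baseline

def rate_of_change(keyword: str, observed_count: int) -> float:
    """Observed mentions as a multiple of the keyword's everyday baseline."""
    return observed_count / BASELINES[keyword]

def check_alerts(counts: dict[str, int]) -> list[str]:
    """Return keywords whose usage has spiked past the alert threshold."""
    return [kw for kw, n in counts.items()
            if rate_of_change(kw, n) >= ALERT_THRESHOLD]

# A 10x spike on "balacera" triggers an alert; normal chatter does not.
print(check_alerts({"balacera": 100, "incendio": 30}))  # ['balacera']
```

<p>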
When a real gunfight occurs, the usage number may spike to 100—a rate of change of 10 times the baseline.</p><p>Brown arranged for the system to send out an e-mail alert when the spike reached a certain level—signifying a noteworthy event was under way. Typically, such an alert would go out less than an hour after the actual start of the event—a testament to the real-time power of Twitter.</p><p>Once the tool saw frequent use, it became evident that the steepness of the keyword usage spike correlated to the severity of the incident in question. For example, in April, the city of Tampico “turned into a war zone” due to violence from drug cartels and gangs. “We were able to see the war was starting within an hour,” Brown says. The spike was roughly 40 times above the normality baseline, and from that steep spike Brown could tell that the local reaction was serious enough to drive many residents and businesses into lockdown mode. “As far as the initial shock—there’s absolutely a correlation,” Brown says. </p><p>The correlation is so solid that it helps Brown make real-life operational decisions. For example, after one violent event, Brown was unsure whether the company’s equipment trucks could drive through the area. Brown gauged the level of chatter and made the assessment: “There’s a lot of checkpoints and it’s going to be slow, but there’s not violence.” The trucks were sent forward; the assessment held true. <br><br></p><p>Crisis Intelligence </p><p>Brown’s intelligence tool, in essence, uses social media data to analyze the extent of an event’s destabilizing force. Some businesses, however, use intelligence tools that deploy analytics on the fly, and in equally challenging situations.</p><p>Imagine, for example, that you are a chief of security for a large company that has a strong presence in Colombia. There is an earthquake in Bogota, where your company has several offices and many employees. 
The city is engulfed in chaos, and your employees have no idea who might be affected, or if anyone is in distress and needs assistance.</p><p>Such a situation demands a rapid analysis of all available information, so that some sort of response can be taken. However, “you can’t act if you don’t have good information, and you don’t know where your people are,” says Dan Richards, CEO of Global Rescue, an emergency evacuation and field rescue firm. </p><p>During these challenging situations, some firms use a type of business intelligence tool that consolidates different platforms within crisis management and response environments into a mobile application, Richards says. These types of systems combine and correlate different data sets, such as the firm’s enterprise footprint and the parameters of the event, to give each user a quick and clear picture of where employees and assets are and what areas of the city have been affected. </p><p>These tools also integrate with a communication component that allows for messages to be sent to selected employees or to everyone. The system tracks who received and replied to messages and who did not, analyzes this information, and then continually updates each employee’s status.</p><p>“When you look at any major crisis when there’s a lot of people involved, a lot of time is wasted in trying to confirm that people who may be in distress are actually hurt,” Richards says. The system also keeps track of all operational responses that the company has taken in real time and automatically informs employees who need to know such updates. </p><p>In addition, these systems can be set up to periodically ping a staffer’s smartphone, so that the return ping “leaves a breadcrumb trail” as to the employee’s location, Richards says. 
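</p><p>The “breadcrumb trail” Richards describes amounts to a last-known-location store. Here is a minimal sketch of that idea; the employee IDs and coordinates are hypothetical, and a real system would layer authentication, mapping, and messaging on top.</p>

```python
from datetime import datetime
from typing import Optional

# Minimal sketch of a "breadcrumb trail": record each employee's pings so
# the last known location survives even if the device later goes silent.
# Employee IDs and coordinates are hypothetical examples.
trails: dict[str, list[tuple[datetime, float, float]]] = {}

def record_ping(employee_id: str, lat: float, lon: float) -> None:
    """Store a timestamped location ping returned by the employee's phone."""
    trails.setdefault(employee_id, []).append((datetime.now(), lat, lon))

def last_known_location(employee_id: str) -> Optional[tuple[float, float]]:
    """Return the most recent coordinates, or None if the employee was never seen."""
    pings = trails.get(employee_id)
    if not pings:
        return None
    _, lat, lon = pings[-1]
    return (lat, lon)

record_ping("emp-042", 4.7110, -74.0721)  # near Bogota
record_ping("emp-042", 4.6980, -74.0650)
print(last_known_location("emp-042"))  # (4.698, -74.065)
```

<p>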
In this way, if an event like an earthquake or flood disables a staffer’s device, the last location before the device stopped working can be ascertained.</p><div><p>In Richards’s view, the use of such business intelligence systems for crisis management is growing, in part because “there’s relatively lean staffing in security.” A company of 10,000 employees, for example, may have only six crisis management executives. “That’s not an advantageous ratio,” Richards says. “You need to have a set of tools that will be extraordinarily effective.” <br><br></p><p>Data Analysis</p><p>A tool such as the one Richards describes, which tracks the whereabouts and status of employees in the field, may also be used in noncrisis situations by a company with a highly mobile work force. “With more people working at home, and off site, keeping track of this decentralized work force has become an increasing challenge,” Richards says.</p><p>But in chaotic and calm times alike, such a tool can be used by any employee who needs to know the status of workers in the field. And that’s reflective of a current trend discussed in <em>The Top Ten Business Intelligence Trends for 2014</em>, a report recently issued by Tableau Software.</p><p>The report finds that the practice of data science is moving from the high-level specialist to the employee in the business community. Data analysis is becoming part of the skill set of ordinary business users, not just a few experts. “We’re starting to see a mass adoption of data tools,” says Ellie Fields, a vice president at Tableau, which specializes in business intelligence. </p><p>Part of this trend is what Fields calls “embedded analytics.” More firms are making analytical tools available to employees on the front lines, such as members of a traveling sales force or security guards patrolling a site. 
By way of explanation, Fields offers a hypothetical scenario: “Wouldn’t it be great if security guards knew that between 1 and 3 is the time when most threats happen, and that they usually happen on this side of the perimeter?”  </p><p>And that security officer who uses a mobile application for a crime data analysis may also be representative of another business intelligence trend—the increased use of predictive analytics. “We’re collecting data on things we didn’t used to have,” Fields says, and that means there is more raw material to analyze and construct sophisticated performance prediction models. “Now people are saying, ‘Let’s see if we can predict when we will have machine failure, based on past results,’” she says.  </p><p>The increased use of business analytical tools has intersected with the rise of cloud computing, and this combination has spawned another recent trend: cloud analytics. So far, this has not occurred on a wide scale, as some organizations still have security concerns about moving sensitive data to the cloud. “I don’t think the three-letter agencies are adopting the cloud anytime soon,” Fields says. </p><p>But other organizations have become comfortable with cloud security and have embraced the concept. Cloud storage can make data access from mobile devices easier; the same advantages apply to analytical programs in the cloud, which can be accessed from mobile devices, like an iPad, and make for more agile, self-serve intelligence, Fields explains. <br><br></p><p>Big Data</p><p>Overload is not the only challenge when it comes to the advance of business intelligence and the growing reliance on Big Data. The increased use of intelligence tools will likely also increase privacy concerns. Take, for example, the crisis intelligence tool that pings smartphones to track the recipient’s whereabouts. Such knowledge could be abused. “Some humans don’t want to be found,” Richards says. 
“As a society, we will have to grapple with those issues.” </p><p>Data collection itself, even for business purposes, can also be viewed as intrusive. To take just one example, Amazon is now offering brick-and-mortar stores a payment-processing device, called Local Register, which will allow the online giant to track a consumer’s offline purchases. Such technologies will spur more discussion about letting people opt out of some data collection processes. </p><p>Moreover, while business intelligence tools are indeed becoming much more common, the skills needed to use those tools to best advantage are less widespread, Brown says. </p><p>This is particularly true for analytic tools that require queries to obtain information. “Everyone wants a piece of the Big Data scene, but what you find is that it becomes very, very complicated and the queries that you are using become very sensitive,” Brown says. “We have a lot of people using analytics that may not really understand what it is they are querying. Every minor change in the query can have a significant impact on findings,” he says. </p><p>Overall, the proper use of intelligence tools is an “art meets science” proposition, and collectively the business community “still has a ways to go” before analytical data skills become commonplace among company staff, Brown says. “I don’t think we’ve reached the point now where we can fully migrate from analysts.” <br></p></div></div>To Protect PII<p><span style="line-height:1.5em;">If you are an employee, a student, a patient, or a client, your personally identifiable information (PII) is out there—and prime for hacking. In October, the U.S. Government Accountability Office (GAO) added protecting the privacy of PII to its list of high-risk issues affecting organizations across the country. 
All organizations, from large federal agencies to universities, hospitals, and small businesses, store PII about their employees, clients, members, or contractors.</span><span style="line-height:1.5em;"> And, as seen in recent large-scale cyberattacks, PII is a hot commodity for malicious attackers. </span></p><p>According to the U.S. Office of Management and Budget, PII is any information that can be used alone or with other sources to uniquely identify, contact, or locate an individual. However, the definition of PII can depend on the context in which the information is used, according to Angel Hueca, information systems security officer with IT consulting company VariQ. For example, a name by itself is innocuous, but that name combined with a personal e-mail address, a Social Security number, or an online screenname or alias could give bad actors all they need to wreak havoc on a person or company.</p><p>And it appears that no one is immune to the risk of compromised PII. According to research by the GAO, 87 percent of Americans can be uniquely identified using only three common types of information: gender, date of birth, and ZIP code. </p><p>If PII is leaked, the consequences for both affected individuals and organizations can be damaging, says Hueca. Companies may face large fines or legal action if the PII they hold is breached, especially if the organization didn’t comply with outlined customer agreements or federal regulations, or if the breach violates the Health Insurance Portability and Accountability Act. A breach can also be reputation-damaging and cost the company employees and clients, Hueca notes. </p><p>Hueca stresses the importance of educating all employees, regardless of whether they have access to the company’s PII, about cybersecurity awareness and online behavior. 
Even an employee using a personal e-mail account at work or posting an image of the workspace on social media could lead to a leak of PII—confidential information may be inadvertently documented in the photo, Hueca points out.</p><p>A more common occurrence is someone with access to an organization’s PII database inadvertently forwarding an e-mail with sensitive information, such as a client’s case number or an employee’s personal contact information. For example, in 2014, a Goldman Sachs contractor accidentally sent an e-mail with confidential brokerage account information to a Google e-mail address instead of to the contractor’s personal e-mail. Goldman Sachs went to the New York State Supreme Court to ask Google to block the recipient from accessing the e-mail to prevent a “needless and massive” data breach. The court didn’t rule on the case, because Google voluntarily blocked access to the e-mail.</p><p>Hueca says that segregating duties and tightly controlling who has access to certain information can help with this issue. Often, HR or administrative employees may need access to some PII, but not all of it—isolating potentially sensitive information can prevent harmful leaks. </p><p>How an organization’s network is set up can also help prevent the accidental or malicious transfer of PII. Hueca suggests keeping sensitive information segregated from the rest of the network environment—if there is a breach, hackers will have to break through a second firewall to access the information. Organizations should also take advantage of standard content tracking software to spot suspicious activity.</p><p>“Fortunately, many organizations have something called content filtering, which are tools that are able to filter information as it comes in and out of the organization,” Hueca explains. 
“If there’s something that looks like a Social Security number, with nine digits, being sent out, the tool will alert an administrator that this activity is happening, which could be accidental or malicious.” </p><p>The U.S. Department of Homeland Security’s (DHS) handbook for safeguarding PII says only secure, organization-issued electronic devices should be used to view sensitive information. If an employee must access PII on a portable device, such as a laptop, USB, or external hard drive, the data should be encrypted. And if PII must be e-mailed within the office, DHS strongly recommends password-protecting the data within the e-mail. </p><p>Lastly, Hueca recommends that all companies have an incident response plan in place specifically for the malicious theft of PII. </p><p>“This is something that most organizations don’t think about, having an incident response plan specifically for a PII breach,” Hueca says. “What happens if you do get breached? What are the steps? Talk about what-ifs. Once you have a notification in place, you get alerted, what do you do? Try to segregate it from other sensitive data and figure out what happened.” </p>
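<p>The nine-digit pattern check Hueca describes for outbound content filtering can be sketched with a simple regular expression. This is a minimal illustration of the idea, not any vendor’s product; real data-loss-prevention tools apply far richer rules and context, and the sample messages below are hypothetical.</p>

```python
import re

# Minimal sketch of a content filter that flags outbound text containing
# something shaped like a U.S. Social Security number: nine digits,
# optionally grouped 3-2-4 with hyphens. Real DLP tools use richer rules.
SSN_PATTERN = re.compile(r"\b\d{3}-?\d{2}-?\d{4}\b")

def flag_outbound(message: str) -> bool:
    """Return True if the message should be held for administrator review."""
    return bool(SSN_PATTERN.search(message))

print(flag_outbound("Case 7, contact 123-45-6789"))  # True
print(flag_outbound("Invoice total: $1,234.56"))     # False
```

<p>Whether such a hit is accidental or malicious is then a judgment for the administrator the alert reaches, as Hueca notes above.</p>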