Mobile Security the Robots
Security Technology | 2018-11-01, Holly Gilbert Stowell<p>More than 80 million customers entrust Credit Karma with their personal data, so the financial services provider says it puts security at the forefront of its operations. <br><br>“Security is woven into everything that we do,” says Luis Cortez, physical security manager at Credit Karma. “You name it, we have stringent controls around it. It’s a highly regulated environment.”<br><br>Headquartered in San Francisco in a building that exceeds 100,000 square feet, the company was recently looking for a way to augment its contracted security guards who provide around-the-clock coverage, Cortez says. “We’re not able to be everywhere at every time,” he notes. “They can’t be everywhere at the same time and they can’t complete as many patrols…. From that perspective, an officer—a human being—can only do so much.”</p><p>While robotics is a growing market within the security industry, Cortez explains that Credit Karma couldn’t hire just any futuristic machine as a force multiplier. The organization needed a solution that would respect the privacy of its members and only collect the information it was supposed to. “Being in such a highly regulated industry, we wanted to make sure it wasn’t anything too intrusive and didn’t collect too much data,” Cortez says. </p><p><img src="/ASIS%20SM%20Article%20Images/1118%20Case%20Study%20Stats.png" class="ms-rtePosition-1" alt="" style="margin:5px;width:194px;height:675px;" />Credit Karma looked into several robotics and facial recognition solutions but wasn’t finding a product that met its high standards for data privacy. “We weren’t finding anything that met our standards. 
Then one of our security engineers actually referred us to Cobalt Robotics, and said, ‘You may want to check these guys out, they’re doing some amazing things,’” Cortez says.</p><p><br>By partnering with Cobalt, a tech start-up that produces roving security robots, Credit Karma says it gained a two-way channel of communication and collaboration. “The implementation wouldn’t have been possible without that deep partnership with Cobalt and understanding the technology on both ends,” Cortez says. <br><br></p><p>The first two robots were deployed at Credit Karma’s headquarters in the summer of 2017. Beginning at 8:00 p.m., two robots patrol separate floors of the headquarters building, one robot per floor. The machines look like slim, tall kiosks with screens that can read badges, display alerts and instructions, and provide two-way interaction with a human operator at Cobalt’s monitoring center. <br></p><p>“It helps us understand usage, how many folks are in our office, how many folks are in our spaces, and it helps us authenticate that they are employees that are supposed to be on site,” Cortez says. “On that specific floor that the robot is on, it will be able to tell us as of 8:00 or 10:00 that day, ‘We were able to count this many people that we ran into.’” </p><p><br>The machines also work as a visitor verification system by matching the person on site with existing access control records. “It can always double-check and verify the visitor is properly checked in,” Cortez adds.</p><p><br>The robots can perform critical tasks in the event of an emergency, like reporting whether a floor has been cleared during an evacuation. They also perform simpler tasks, like detecting leaks, spills, and broken lights. </p><p><br>“Whenever it sees something out of the ordinary or sees an incident, it will contact one of the Cobalt specialists, and that individual will then escalate the response as necessary,” he notes. 
“It’s not just user-friendly and analytical—it’s also a moving, roving alarm system.” </p><p><br>If an incident or anomaly is detected, the machine sends an alert to someone internally on the escalation list at Credit Karma, who is connected to a live human operator. “The Cobalt specialist contacts the individual on site and lets them know, ‘Hey this is going on, can we please verify?’” Cortez says. <br>At that point a security officer or staff member is dispatched to check out the situation. “Once that verification is made, we then make the determination, ‘Yes, contact the authorities,’ or ‘We can handle this internally.’” </p><p><br>The wealth of sensors and cameras on the robot provides real-time intelligence for the Credit Karma team. “One thing that’s been really useful for us is the unusual noise recognition,” Cortez notes. “Anything that happens above a certain decibel, the robot comes and takes a look.” </p><p><br>Daily, weekly, and monthly reports are generated that help the company detect incident patterns or plan for future security needs. “From a technology standpoint it definitely helps us. The more data you have, the more you’re able to quantify and qualify what you need to accomplish,” he says. “And in the security industry, better numbers make for a safer location—and it makes our employees feel safer.”</p><p><br>Cortez notes the human operator aspect provides an extra level of comfort when an incident occurs. “In the event that you are having an issue, the operator can provide those calming words and say, ‘How may I help you? I’m here,’” he says. “You’re not just speaking to a machine or to an intercom—it’s that fast, rapid response of an actual individual being right there and then with you.”</p><p><br>Credit Karma is currently looking into deploying a third robot for the building, and notes the possibilities are endless when it comes to what the robots can do. 
“The robot isn’t just a data collection machine, it’s a combination of live assistance and automation,” Cortez explains. “Its capabilities for expansion have really been huge to help us move our security and our enterprise forward.” </p><p><br>Robots may still be thought of as something out of science fiction, but at Credit Karma, the machines are providing on-the-ground security. “It’s not gimmicky, it’s not an Internet of Things device,” Cortez says. “It’s actually a helpful tool for collection and a force multiplier for the human aspect of security.” <br><br><em>For More Information: Travis Deyle, <a href="mailto:[email protected]">[email protected]</a>, 650.781.3626.</em><br><br></p>
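The noise-recognition behavior Cortez describes, where anything above a certain decibel level prompts the robot to investigate, amounts to a simple level-threshold trigger on audio samples. The sketch below is illustrative only; the threshold, calibration offset, and function names are hypothetical assumptions, not Cobalt's implementation.

```python
import math

NOISE_THRESHOLD_DB = 70.0  # hypothetical trigger level
REFERENCE = 32768.0        # full scale for 16-bit PCM samples

def rms_db(samples):
    """Return the RMS level of a block of PCM samples in dBFS-style units."""
    if not samples:
        return float("-inf")
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0:
        return float("-inf")
    return 20 * math.log10(rms / REFERENCE)

def should_investigate(samples, threshold_db=NOISE_THRESHOLD_DB):
    """Flag a sample block whose level exceeds the configured threshold."""
    # dBFS values are negative; a hypothetical calibration offset shifts
    # them so the threshold reads like an SPL-style decibel figure.
    level = rms_db(samples) + 100
    return level > threshold_db

# A loud block trips the trigger; near-silence does not.
loud = [20000, -21000, 19500, -20500]
quiet = [10, -12, 8, -9]
```

In a real deployment the trigger would feed the escalation path described in the article: the flagged audio (plus camera footage) goes to a remote specialist, who decides whether to contact the on-site escalation list.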


Manual of Private Investigation Techniques<div class="body"> <p> <em> <strong> <span style="color:red;">*****</span> A Manual of Private Investigation Techniques. Edited by William F. Blake. Charles C. Thomas Publishers, Ltd.; 326 pages; $39.95; also available as e-book. </strong> </em> </p> <p>The editor of this volume was able to amass an amazing number of beneficial articles for both aspiring and experienced investigators. Although clearly developed for private investigators, its breadth of topics pertaining to various types of investigations gives it significance for investigators working in the public sector as well.</p> <p>The book presents the reader with an array of interesting essays on useful topics such as premises liability, undercover operations, integrity investigations, protecting assets, mortgage fraud, arson investigations, and homicide investigations. Many other investigative topics are explored in this tome as well.</p> <p>The authors of these articles often incorporate information on how the various types of investigations should be conducted. There is worthwhile information in these articles that will enable private investigators to educate their respective clients on potential issues in their businesses that could create vulnerabilities for criminal exploitation. Collectively, the contributing authors adequately spell out the applicable best investigative practices as they survey the various types of investigations.</p> <p>In short, this work is a valuable contribution to the field of investigation, especially in the private sector. The editor did a superb job of collecting meaningful articles pertaining to the study of investigation as well as the investigative process.<br></p> <hr /> <span style="color:#800000;"> <strong>Reviewer: </strong> </span>Hugh J. Martin is a retired police chief from Wisconsin. He is a graduate of the FBI National Academy and a member of ASIS. 
<p></p></div>Physical Security | Case Study: Loss Prevention at Stew Leonard’s Farm Fresh Foods<p>In 1969, a Norwalk, Connecticut, dairy farmer named Stew Leonard had an epiphany: the milk delivery business was going to go the way of the horse and buggy. Leonard decided to found a small dairy shop with seven employees, carrying just eight items. The little market quickly grew into the world’s largest dairy store. Today, Stew Leonard’s Farm Fresh Foods stores are located in Norwalk, Danbury, and Newington, Connecticut; and Yonkers, New York. It is a $300 million annual enterprise with approximately 2,000 employees.</p><p>The stores sell more than 6,000 items—everything from meats to wine to bakery goods to gifts—in a farmer’s market atmosphere where the primary rule is literally chiseled in stone on three-ton granite slabs parked at the entrance to each store: “The Customer is Always Right.” In addition to its commitment to customer service, Stew Leonard’s is regarded as the Disneyland of dairy stores because of its costumed characters, petting zoo, and animatronics. </p><p>The loss prevention (LP) department at Stew Leonard’s seeks creative solutions. It hasn’t gone so far as to use the company’s six-foot-tall dancing bananas as spy cams, but it has put into place an innovative video synopsis technology that enables security to save copious staff hours during investigations.</p><p>“We’re a relatively small security department with a significant enclave of cameras throughout our buildings,” says Bruce Kennedy, Stew Leonard’s director of LP and logistics. “We have close to 500 cameras in the network.” This becomes an issue when investigations of thefts, accidents, and other issues occur. 
“One investigation can take eight to 12 hours—especially if there is video involved,” he states.</p><p>About two years ago, the loss prevention team began looking for a solution to shorten the process of reviewing CCTV video. “We looked at the entire gamut of solutions…. We attended several trade shows, took a look at white papers on the Internet, and I spoke to a lot of my peers in the industry,” Kennedy recalls, adding that the solution they were seeking had to be “cost-feasible; it had to integrate seamlessly, and had to be easy to use—that was probably the most important thing.” </p><p> </p><p>LP evaluated all the technologies and narrowed it down to the two strongest contenders. “They were separated by extremes,” he says. “One involved sending the video to a third party to review and after a week or two, we’d get a response back. That was too long. We wanted to provide answers as quickly as we could.”</p><p>The other contender was BriefCam, by BriefCam Ltd. of Neve Ilan, Israel, an award-winning product suggested by an integrator who had worked with Stew Leonard’s on its CCTV system. (BriefCam was the winner of the 2011 ASIS International Accolades Award for Surveillance, as well as the 2010 Wall Street Journal Technology Innovation Award for Physical Security, and other honors.) </p><p>Kennedy contacted the company. “They were just starting to get a hold in the industry in the United States. They said, ‘Give us a try,’ and from the onset we saw the possibilities.”</p><p>BriefCam offers two solutions: BriefCam Video Synopsis (VS) Enterprise and VS Forensics, both of which use the same technology to allow video reviews to proceed at a greatly quickened pace. VS Enterprise can fully integrate with almost all IP cameras and digital video recorders (DVRs) or networked video recorders (NVRs). 
The software interface allows users to pull recorded footage by specific dates and times; it then uses video-analytic software to compress the footage into a radically shortened synopsis. One hour of footage can be viewed in an average of one minute.</p><p>Kennedy explains that what he and his team see when they review the footage is a series of superimposed streams of activity—for example, if the feed is from a camera focused on the entrance to one of the Stew Leonard’s stores, the synopsis will show every customer or employee coming and going, each one with a time stamp following them along. “When we see something of interest, we click on the time stamp or the image,” he says. “BriefCam isolates that specific video stream.” </p><p> </p><p>Asked if the superimposed images are confusing, Kennedy responds that they are not, especially because in the majority of investigations the security staff already knows what to look for. “So, if we are looking for a customer in a red baseball cap, once we see him, we click on him, and everyone else goes away except that customer,” he states.</p><p>The BriefCam VS Forensics creates the same time-stamped synopses, but is a standalone, offline application that does not require integration with a DVR or NVR. Users import the video into VS Forensics to create the time-compression synopses.</p><p>Kennedy decided to employ both the VS Enterprise and VS Forensics at Stew Leonard’s. The installation took place last autumn at all the stores and also at Stew Leonard’s Wine Stores, which is a legally separate entity that contracts the use of Stew Leonard’s name and human resources, public relations, loss prevention, and security services.</p><p>During integration, there were some technical kinks, “but the BriefCam staff were absolutely great,” he says. </p><p>“Early in the process,” he recalls, “as we spoke to vendors, we told them that we were looking for a partnership with Stew Leonard’s. 
This is not going to be a purchase and out-the-door kind of thing. We wanted to be able to call someone and get a response back to any critical issues we came across. They have come through on that.”</p><p>A BriefCam technical specialist worked with Stew Leonard’s IT department during the installation, placing the VS Enterprise software on the servers that operate the cameras covering critical areas, such as money rooms and cash registers. “Those are dialed directly into the software application so we can pull up synopses immediately,” Kennedy says. </p><p> </p><p>Synopses from noncritical cameras are created by VS Forensics on an as-needed basis, rather than being connected directly. “We can pull video from a specific camera, from any server we want, and we can export it—bring it into BriefCam VS Forensics…. It gives you the exact same synopsis as VS Enterprise will, there’s just an extra step in between.”</p><p>Ease of use was important to security, and Kennedy says the pair of solutions have not disappointed in that regard. “Within five to 10 minutes of sitting down with the product, [LP officers] were fully trained,” he states.</p><p>And after seven months of use, Kennedy couldn’t be more pleased. “It has worked very well. There have been a lot of successes with investigations we’ve done. We’ve also used it in nonconventional LP aspects of business. For example, it is deployed at some of our wine stores; and we use it there for marketing purposes, product flow, customer flow, and to monitor activity during tastings. We give our feedback to non-LP folks, the marketers, and the buyers,” he says.</p><p>Kennedy also states that VS Enterprise and VS Forensics “have already paid for themselves. 
Return on investment was in less than six months.”<br><em><br>(For more information: BriefCam, Ltd.; e-mail: [email protected].)</em></p>Physical Security | The Unique Threat of Insiders<p>It’s perhaps the most infamous incident of an insider threat in modern times. During the spring and summer of 2013, then-National Security Agency (NSA) contractor and SharePoint administrator Edward Snowden downloaded thousands of documents about the NSA’s telephone metadata mass surveillance program onto USB drives, booked a flight to Hong Kong, and leaked those documents to the media.</p><p>An international manhunt was launched, Snowden fled to Moscow, hearings were held in the U.S. Congress, and new policies were created to prevent another insider breach. The damage a trusted insider can do to an organization became painfully obvious.</p><p>“If you’d asked me in the spring of 2013…what’s the state of your defense of the business proposition as it validates the technology, people, and procedures? I would have said, ‘Good. Not perfect,’” said Chris Inglis, former deputy director and senior civilian leader of the NSA during the Snowden leaks, in a presentation at the 2017 RSA Conference in San Francisco.</p><p>“I would have said that ‘we believe, given our origins and foundations, and folks from information assurance, that that’s a necessary accommodation,’” he explained. “We make it such that this architecture—people, procedure, and technology—is defensible.”</p><p>Inglis also would have said that the NSA vetted insiders to ensure trustworthiness, gave them authority to conduct their jobs, and followed up with them if they exceeded that authority—intentionally or unintentionally—to remediate it. </p><p>“We made a critical mistake. 
We assumed that outsider external threats were different in kind than insider threats,” Inglis said. “My view today is they are exactly the same. All of those are the exercise of privilege.”</p><p>Inglis’ perspective mirrors findings from the recent SANS survey Defending Against the Wrong Enemy: 2017 SANS Insider Threat Survey by Eric Cole, SANS faculty fellow and former CTO of McAfee and chief scientist at Lockheed Martin.</p><p>The SANS survey of organizations with 100 to 100,000 employees found that it can be easy to conclude that external attacks should be the main focus for organizations. </p><p>“This conclusion would be wrong. The critical element is not the source of a threat, but its potential for damage,” Cole wrote. “Evaluating threats from that perspective, it becomes obvious that although most attacks might come from outside the organization, the most serious damage is done with help from the inside.”</p><h4>Insider Threat Programs</h4><p>Incidents like the Snowden leaks and the more recent case of Harold Thomas Martin III, an NSA contractor accused of taking top secret information home with him, along with other incidents of economic espionage, have raised awareness of the impact insider threats can have. However, many organizations have not adjusted their security posture to mitigate those threats.</p><p>In its survey, SANS found that organizations recognize insider threat as the “most potentially damaging component of their individual threat environments.” “Interestingly, there is little indication that most organizations have realigned budgets and staff to coincide with that recognition.”</p><p>Of the organizations surveyed, 49 percent said they are in the process of creating an insider threat program, but 31 percent still have no plan and are not addressing insider threats. 
</p><p>“Unfortunately, organizations that lack effective insider threat programs are also unable to detect attacks in a timely manner, which makes the connection difficult to quantify,” SANS found. “From experience, however, there is a direct correlation between entities that ignore the problem and those that have major incidents.”</p><p>Additionally, because many are not monitoring for insider threats, most organizations claim that they have never experienced an insider threat. “More than 60 percent of the respondents claim they have never experienced an insider threat attack,” Cole wrote. “This result is very misleading. It is important to note that 38 percent of the respondents said they do not have effective ways to detect insider attacks, meaning the real problem may be that organizations are not properly detecting insider threats, not that they are not happening.”</p><p>The survey also found that the losses from insider threats are relatively unknown because they are not monitored or detected. Due to this, organizations cannot put losses from insider threats into financial terms and may not devote resources to addressing the issue, making it difficult or impossible to determine the cost of an insider attack.</p><p>For instance, an insider could steal intellectual property and product plans and sell them to a competitor without being detected.</p><p>“Subsequent failure of that product might be attributed to market conditions or other factors, rather than someone ‘stealing it,’” Cole wrote. 
“Many organizations, in my experience, are likely to blame external factors and only discover after detailed investigation that the true cause is linked back to an insider.”</p><p>And when organizations do discover that an insider attack has occurred, most have no formal internal incident response plan to address it.</p><p>“Despite recognition of insiders as a common and vulnerable point of attack, fewer than 20 percent of respondents reported having a formal incident response plan that deals with insider threat,” according to the SANS survey. </p><p>Instead, most incident response plans are focused on external threats, Cole wrote, which may explain why companies struggle to respond to insider threats.</p><p>Organizations are also struggling to deal with both malicious and accidental insider threats—the latter being a legitimate user whose credentials were stolen or who has been manipulated into giving an external attacker access to the organization. “Unintentional insider involvement can pose a greater risk, and considerably more damage, by allowing adversaries to sneak into a network undetected,” the survey found. “Lack of visibility and monitoring capability are possible explanations for the emphasis on malicious insiders.”</p><p>To begin to address these vulnerabilities, SANS recommends that organizations identify their most critical data, determine who has access to that data, and restrict access to only those who need it. Then, organizations should focus on increasing visibility into users’ behavior to be proactive about insider threats. </p><p>“We were surprised to see 60 percent of respondents say they had not experienced an insider attack,” said Cole in a press release. 
“While the confidence is great, the rest of our survey data illustrates that organizations are still not quite effective at proactively detecting insider threats, and that increased focus on individuals’ behaviors will result in better early detection and remediation.”</p><h4>Trusted People</h4><p>When the NSA recruits and hires people, it vets them thoroughly to ensure their trustworthiness, according to Inglis.</p><p>“We ultimately want to bring somebody into the enterprise who we can trust, give them some authority to operate within an envelope that doesn’t monitor their tests item by item,” he explained. “Why? Because it’s within that envelope that they can exceed your expectations and the adversary’s expectations, your competitors’ expectations, and hopefully the customers’ expectations. You want them to be agile, creative, and innovative.”</p><p>To do this, the NSA would go to great lengths to find people with technical ability and possible trustworthiness. Then it or a third party would vet them, looking at their finances and their background, conducting interviews with people who knew them, and requiring polygraph examinations.</p><p>After the Snowden leaks, the U.S. federal government examined the work of its contract background screening firm—United States Investigations Services (USIS). USIS had cleared both Snowden and the Washington Navy Yard shooter Aaron Alexis. The government decided to reduce its contracted work with the company.</p><p>USIS later agreed to pay $30 million to settle U.S. federal fraud charges, forgoing payments that it was owed by the U.S. Office of Personnel Management for conducting background checks. 
The charges included carrying out a plot to “flush” or “dump” individual cases that it deemed to be low level to meet internal USIS goals, according to The Hill’s coverage of the case.</p><p>“Shortcuts taken by any company that we have entrusted to conduct background investigations of future and current federal employees are unacceptable,” said Benjamin Mizer, then head of the U.S. Department of Justice’s Civil Division, in a statement. “The Justice Department will ensure that those who do business with the government provide all of the services for which we bargained.”</p><p>This part of the process—vetting potential employees and conducting background checks—is where many private companies go wrong, according to Sandra Stibbards, owner and president of Camelot Investigations and chair of the ASIS International Investigations Council.</p><p>“What I’ve come across many times is companies are not doing thorough backgrounds, even if they think they are doing a background check—they are not doing it properly,” she says. </p><p>For instance, many companies will hire a background screening agency to do a check on a prospective employee. The agency, Stibbards says, will often say it’s doing a national criminal search when really it’s just running a name through a database that has access to U.S. state and county criminal and court records that are online.</p><p>“But the majority of counties and states don’t have their criminal records accessible online,” she adds. “To really be aware of the people that you’re getting and the problem with the human element, you need to have somebody who specializes and you need to…invest the money in doing proper background checks.”</p><p>To do this, a company should have prospective employees sign a waiver that informs them that it will be conducting a background check on them. 
This check, Stibbards says, should involve looking at criminal records in every county and state the individual has lived in, many of which will need to be visited in person.</p><p>She also recommends looking into any excessive federal court filings the prospective employee may have made.</p><p>“I’ll look for civil litigation, especially in the federal court because you get people that are listed as a plaintiff and they are filing suits against companies for civil rights discrimination, or something like that, so they can burn the company and get money out of it,” Stibbards adds.</p><p>Additionally, Stibbards suggests looking for judgments, tax liens, and bankruptcies, because that gives her perspective on whether a person is reliable and dependable.</p><p>“It’s not necessarily a case breaker, but you want to have the full perspective of if this person is capable of managing themselves, because if they are not capable of managing themselves, they may not make the greatest employee,” she says.</p><p>Companies should ensure that their background screenings also investigate the publicly available social media presence of potential employees. Companies can include information about this part of the process in the waiver that applicants sign agreeing to a background check to avoid legal complications later on. </p><p>“I’m going to be going online to see if I see chatter about them, or if they chat a lot, make comments on posts that maybe are inappropriate, if they maintain Facebook, LinkedIn, and Twitter,” Stibbards says. </p><p>Posting frequently to social media might be a red flag. “If you find somebody on Facebook that’s posting seven, eight, nine, or 10 times a day, this is a trigger point because social media is more important to them than anything else they are doing,” Stibbards adds.</p><p>And just because a prospective employee is hired doesn’t mean that the company should discontinue monitoring his or her social media. 
While ongoing review is typically a routine measure, it can lead to disciplinary action for an employee who made it through the initial vetting process. For instance, Stibbards was hired by a firm to investigate an employee after the company had some misgivings about certain behaviors.</p><p>“Not only did we find criminal records that weren’t reported, but we then found social media that indicated that the employee was basically a gang member—pictures of guns and the whole bit,” Stibbards says.</p><p>It’s also critical, once a new employee has been brought on board, to introduce him or her to the culture of the organization—an aspect that was missing in Snowden’s onboarding process, Inglis said. This is because Snowden was a contractor, and regulations prohibited the U.S. government from training him. </p><p>“You show up as a commodity on whatever day you show up, and you’re supposed to sit down, do your work—sit down, shut up, and color within the lines,” Inglis explained.</p><p>So on Snowden’s first day at the NSA, he was not taken to the NSA Museum like other employees and taught about the agency’s history, the meaning of the oath new employees take, and the contributions the NSA makes to the United States.</p><p>“Hopefully there are no dry eyes at that moment in time, having had a history lesson laying out the sense of the vitality and importance of this organization going forward,” Inglis explained. “We don’t do that with contractors. 
We just assume that they already got that lesson.”</p><p>If companies fail to introduce contractors and other employees to the mission of the organization and its culture, those employees will not feel that they are part of the organization.​</p><h4>Trusted Technology</h4><p>Once trusted people are onboarded, companies need to evaluate their data—who has access to it, what controls are placed on it to prevent unwarranted access, and how that access is monitored across the network.</p><p>“The one thing I always recommend to any company is to have a monitoring system for all of their networks; that is one of the biggest ways to avoid having issues,” Stibbards says. “Whether it’s five people working for you or 100, if you let everybody know and they are aware when they are hired that all systems—whether they are laptops or whatever on the network—are all monitored by the company, then you have a much better chance of them not doing anything inappropriate or…taking information.”</p><p>These systems can be set up to flag when certain data is accessed or if an unusual file type is emailed out of the network to another address. </p><p>Simon Gibson, fellow security architect at Gigamon and former CISO at Bloomberg LP, had a system like this set up at Bloomberg, which alerted security staff to an email sent out with an Adobe PDF of an executive’s signature.</p><p>“He’s a guy who could write a check for a few billion dollars,” Gibson explains. “His signature was detected in an email being sent in an Adobe PDF, and it was just his signature…of course the only reason you would do that is to forge it, right?”</p><p>So, the security team alerted the business unit to the potential fraud. But after a quick discussion, the team found that the executive’s signature was being sent by a contractor to create welcome letters for new employees.</p><p>“From an insider perspective, we didn’t know if this was good or bad,” Gibson says. 
“We just knew that this guy’s signature probably ought not be flying in an email unless there’s a really good reason for it.”</p><p>Thankfully, Bloomberg had a system designed to detect when that kind of activity was taking place in its network and was able to quickly determine whether it was malicious. Not all companies are in the same position, says Brian Vecci, technical evangelist at Varonis, an enterprise data security provider.</p><p>In his role as a security advocate, Vecci goes out to companies and conducts risk assessments to look at what kinds of sensitive data they have. Forty-seven percent of the companies he’s looked at have had more than 1,000 sensitive data files that were open to everyone on their network. “I think 22 percent had more than 10,000 or 12,000 files that were open to everybody,” Vecci explains. “The controls are just broken because there’s so much data and it’s so complex.”</p><p>To begin to address the problem, companies need to identify what their most sensitive data is and do a risk assessment to understand what level of risk the organization is exposed to. “You can’t put a plan into place for reducing risk unless you know what you’ve got, where it is, and start to put some metrics or get your arms around what is the risk associated to this data,” Vecci says. </p><p>Then, companies need to evaluate who should have access to what kinds of data, and create controls to enforce that level of access. </p><p>Gaps in this area are what allowed Snowden to gain access to the thousands of documents he was then able to leak. Snowden was a SharePoint administrator who populated a server with information that thousands of analysts could use to chase threats. His job was to understand how the NSA collects, processes, stores, queries, and produces information.</p><p>“That’s a pretty rich, dangerous set of information, which we now know,” Inglis said. 
“And the controls were relatively low on that—not missing—but low because we wanted that crowd to run at that speed, to exceed their expectations.”</p><p>Following the leaks, the NSA realized that it needed to place more controls on data access because, while a major leak like Snowden’s had a low probability of happening, when it did happen the consequences were extremely high. </p><p>“Is performance less sufficient than it was before these maneuvers? Absolutely,” Inglis explained. “But is it a necessary alignment of those two great goods—trust and capability? Absolutely.”</p><p>Additionally, companies should have a system in place to monitor employees’ physical access at work to detect anomalies in behavior. For instance, if a system administrator who normally comes to work at 8:00 a.m. and leaves at 5:00 p.m. every day suddenly comes into the office at 2:00 a.m., or shows up with a data storage device at a facility that is not in his normal rotation, his activity should be a red flag.</p><p>“That ought to be a clue, but if you’re not connecting the dots, you’re going to miss that,” Inglis said.</p><h4>Trusted Processes</h4><p>For network monitoring technology to be truly effective, however, companies need processes in place to respond to anomalies. This is especially critical because the security team is often not completely aware of what business units in the company are doing, Gibson says.</p><p>While at Bloomberg, his team would occasionally get alerts that someone had sent sensitive material—such as a document marked confidential—to a private email address. “When the alert would fire, it would hit the security team’s office and my team would be the first people to open it and look at it and try to analyze it,” Gibson explains. 
“The problem is, the security team has no way of knowing what’s proprietary and valuable, and what isn’t.”</p><p>To gather this information, the security team needs to have a healthy relationship with the rest of the organization, so it can reach out to others in the company—when necessary—to quickly determine whether an alert is a true threat or legitimate business, like the signature email. </p><p>Companies also need to have a process in place to determine whether an employee used his or her credentials to inappropriately access data on the network or whether those credentials were compromised and used by a malicious actor. </p><p>Gibson says this is one of the main threats he examines at Gigamon from an insider threat perspective because most attacks are carried out using people’s credentials. “For the most part, on the network, everything looks like an insider threat,” he adds. “Take our IT administrator—someone used his username and password to log in to a domain controller and steal some data…I’m not looking at the action taken on the network, which may or may not be a bad thing, I’m actually looking to decide, are these credentials being used properly?”</p><p>The security team also needs to work with the human resources department to be aware of potential problem employees who might have exceptional access to corporate data, such as a system administrator like Snowden.</p><p>For instance, Inglis said that Snowden was involved in a workplace incident that might have changed the way he felt about his work at the NSA. 
Because Snowden was a systems administrator with incredible access to the NSA’s systems, Inglis said, it would have made sense to put a closer watch on him after that 2012 incident; the consequences of an attack by Snowden on the NSA’s network would have been high.</p><p>“You cannot treat HR, information technology, and physical systems as three discrete domains that are not somehow connected,” Inglis said.</p><p>Hiring trusted people, using network monitoring technology, and having procedures to respond to alerts can all help prevent insider threats. But, as Inglis knows, there is no guarantee.</p><p>“Hindsight is 20/20. You have to look and say, ‘Would I theoretically catch the nuances from this?’”</p>
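The egress alerting that Stibbards and Gibson describe reduces to a simple rule: flag outbound mail whose attachments match sensitive patterns, then let a human triage the alert, as Bloomberg's team did with the signature PDF. The sketch below is a hypothetical illustration of that idea only; the domain, keyword list, and function are invented for this example and do not represent any vendor's actual product.

```python
# Minimal sketch of an egress-alert rule. All names and keywords
# here are hypothetical illustrations, not a real monitoring API.

SENSITIVE_KEYWORDS = {"confidential", "signature", "ssn"}
INTERNAL_DOMAIN = "example.com"  # assumed company domain

def should_alert(sender: str, recipient: str, attachments: list[str]) -> bool:
    """Return True if a message leaves the network with a
    suspicious-looking attachment and should be escalated."""
    # Outbound = recipient is outside the company domain.
    outbound = not recipient.lower().endswith("@" + INTERNAL_DOMAIN)
    # Suspicious = any attachment name matches a sensitive keyword.
    suspicious = any(
        any(keyword in name.lower() for keyword in SENSITIVE_KEYWORDS)
        for name in attachments
    )
    # An alert is only a starting point: as in the Bloomberg anecdote,
    # a human still decides whether the transfer is malicious.
    return outbound and suspicious

# Example: an executive-signature PDF mailed to a private address.
print(should_alert("hr@example.com", "someone@gmail.com", ["ceo_signature.pdf"]))
```

In practice the triage step matters as much as the rule itself: the same alert fired for both a would-be forger and a contractor preparing welcome letters, and only a conversation with the business unit could tell them apart.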