
Why Build a Local Threat Intelligence Infrastructure with Automated Static Analysis?


At a recent FS-ISAC event, I listened to a cybersecurity analyst explain that a majority of the global (external) threat intelligence he receives is useless for his organization. “It just doesn’t apply to us,” he said, “and of the data that I think does apply, I have no fast and easy way to confirm that it does. It is still a lot of guesswork.”

At the same time, security teams have instrumented many systems to gather information about what is happening in their environments. These systems collect data from endpoints, networks, and other sources and send it to SIEMs or next-generation databases for analytics. But the information collected lacks depth, providing only a cursory snapshot of the select few observables that each collecting technology focuses on. For example, the picture an endpoint solution provides is limited to the few executable platforms it supports, and only to objects that may have executed. Objects lurking unseen are neither noted nor analyzed.

Similarly, when capturing information from dynamic analysis solutions (sandboxes), the information is limited to objects that can be detonated (say, Windows files) and that exhibit behavior interesting enough to be deemed worthy of collection. Evasive objects, unsupported platforms, and statically embedded content are not captured. Finally, should anything change (as it frequently does) and something unknown be successfully identified, all information related to the malicious object that was bypassed, dropped, or misclassified would be out of reach for security response teams.

Let Us Not Forget

No matter how complete your cybersecurity detection and defenses are, some malware is going to slip into your environment unseen. No organization can hit the magic 100% detection number, and adversaries know this; they know modified (polymorphic) or zero-day attacks will get through sooner or later. For this reason, security teams must constantly search for, find, and contain the unknown malware that has bypassed defenses.

As the FS-ISAC speaker made clear, there is no easy way to determine which global threat intelligence is relevant to a specific organization at a particular point in time. Global intelligence provides ample context about lots of files, but understanding which of them matter to your organization, and when, involves inefficient guesswork. What if global threat intelligence alerts you to a threat that entered your network days, if not weeks, earlier? You would have no reliable record of the event, and no reason to believe that this global insight is relevant to you.

But what if information could be captured in a rich and in-depth manner covering all events and objects that an organization is seeing?

This information could be correlated with a prioritized view of risks, threats, and anomalies, and then linked to the associated files. A security analyst would thus have valuable, locally derived intelligence to work with. This locally collected information (local threat intelligence) would give a cybersecurity analyst precise ways to match relevant global threat intelligence to what matters to their organization at that moment. It would allow file dispositions to be adjusted retrospectively as global intelligence changes. It would support the discovery of locally relevant but globally unknown threats. And it would make threat intelligence actionable, improving the speed, accuracy, and overall effectiveness of the entire threat detection and response process.

Looking Inside to Make Global Intelligence Valuable

In today’s threat environment it is necessary to deploy an internal infrastructure that can find, monitor, examine, and contain (via remediation, blocking, or deception) all files, objects, and transactions relevant to the enterprise’s well-being. Analysts and threat hunters could then analyze all files using a common methodology, regardless of operating system or file type. An analyst would be able to search using relevant attributes (hashes, strings, behavioral attributes, similarity, and so on) to identify sleeper or unknown, unwanted content, as the sketch below illustrates. From there, because all local threat context is known, analysts could identify the causes and true extent of any given campaign. But deploying that continuous-monitoring infrastructure, and connecting existing security systems into it, remains elusive in operational environments.
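To make that attribute search concrete, below is a minimal sketch, assuming a hypothetical local metadata index stored as JSON lines with one record per observed file; the field names and the local_intel.jsonl path are illustrative, not any particular product's schema.

```python
# A sketch of attribute-based search over a hypothetical local metadata
# index (JSON lines, one record per observed file). Field names such as
# "sha256" and "strings" are assumptions for illustration.
import json

def search_index(index_path, sha256=None, contains_string=None):
    """Yield records matching a file hash or a recorded embedded string."""
    with open(index_path) as index:
        for line in index:
            record = json.loads(line)
            if sha256 and record.get("sha256") == sha256:
                yield record
            elif contains_string and contains_string in record.get("strings", []):
                yield record

# Example: find every locally seen file that embeds a suspicious string.
for hit in search_index("local_intel.jsonl", contains_string="connect-back"):
    print(hit["path"], hit["sha256"])
```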

Automated Static Analysis to the Rescue

Hear that bugle in the distance, those thundering hooves? Well, that’s your reinforcements arriving. A better analogy is a force multiplier, like adding airpower to cover a ground assault. That is what automated static analysis is like: it sees more, moves faster, and delivers better results across a threat surface than existing systems can on their own. This technology offers powerful ways to unpack and decompose almost any object and investigate its inner workings. So how does that capability, as cool as it sounds, help with creating an internal continuous monitoring infrastructure?

The answer lies in the two things automated static analysis does very well: it decomposes files regardless of their platform, enabling internal views of those files, and it does this very fast. Fast enough that millions of files, say from an email server, can be run through a static analysis engine, analyzed, and classified as good, bad, suspicious, or unknown (with a risk score) in a matter of seconds. More importantly, metadata about those files can be collected and stored to serve as the foundation of a local threat intelligence infrastructure. For organizations pursuing data lake strategies, automated static analysis lets them collect detailed attributes on millions of files a day in their environments for use in future hunting and correlation.
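As a rough sketch of that collection step, the example below walks a staging directory, records identifying metadata for each file, and appends one record per file to a JSON-lines index. The directory layout, field names, and the score_file() placeholder (standing in for a real static analysis engine) are all assumptions for illustration.

```python
# A sketch of the collection side. score_file() is a placeholder for a
# real static analysis engine that would decompose the file and return a
# classification plus a risk score; everything else is standard library.
import hashlib
import json
import os
import time

def file_metadata(path):
    """Compute basic identifying attributes for one file."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha256.update(chunk)
    return {
        "path": path,
        "sha256": sha256.hexdigest(),
        "size": os.path.getsize(path),
        "first_seen": int(time.time()),
    }

def score_file(record):
    # Placeholder verdict; a real engine would return good/suspicious/
    # malicious/unknown along with a meaningful risk score.
    return {"classification": "unknown", "risk_score": 0}

def ingest(staging_dir, index_path):
    """Append one metadata record per staged file to a JSON-lines index."""
    with open(index_path, "a") as index:
        for name in os.listdir(staging_dir):
            path = os.path.join(staging_dir, name)
            if not os.path.isfile(path):
                continue
            record = file_metadata(path)
            record.update(score_file(record))
            index.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    ingest("staging", "local_intel.jsonl")
```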

Data lakes provide comprehensive visibility: they store massive amounts of correlated data that can be used for advanced searches aimed at identifying suspect content that was not flagged by other detection methods when it was received. Additionally, some organizations are building sizeable file lakes to store all malicious, suspicious, and otherwise unknown content. This content is then used for retrospective hunting with custom, law enforcement-provided, or regulator-provided YARA rules.
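A minimal retrohunting sketch along those lines follows, assuming the yara-python package and a file lake laid out as a directory tree of retained samples. The rule is an illustrative placeholder, not a real detection.

```python
# A retrohunting sketch over a file lake, assuming the yara-python
# package (pip install yara-python). The rule below is a placeholder.
import os
import yara

RULE_SOURCE = r"""
rule example_backdoor_marker
{
    strings:
        $s1 = "connect-back" ascii nocase
    condition:
        $s1
}
"""

def retrohunt(file_lake_dir, rule_source=RULE_SOURCE):
    """Scan every retained sample; yield (path, matched rule names)."""
    rules = yara.compile(source=rule_source)
    for root, _dirs, names in os.walk(file_lake_dir):
        for name in names:
            path = os.path.join(root, name)
            try:
                matches = rules.match(path)
            except yara.Error:
                continue  # unreadable sample; skip it
            if matches:
                yield path, [m.rule for m in matches]

if __name__ == "__main__":
    for path, hits in retrohunt("file_lake"):
        print(path, hits)
```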

A Vision of the Future

Envision this capability in your organization. With a local threat intelligence source in place, you have a new working model that is more efficient and effective. For example, when law enforcement or regulators ask you to look for malware with specific characteristics, an analyst or threat hunter can write a rule, search the data lake and file lake, and find the specific suspect files. When other security tools provide hints about high-risk files, rules can be created that enable a speedy search of the entire local data lake or file lake, revealing every locally discovered file the rule describes. Subsequent investigations are far more focused, efficient, and productive.

Another practical example: suppose there is a spike in ransomware attacks that deliver a new backdoor variant through certain Windows PE files, and your threat intelligence feeds have samples or additional information on it. Now you can query your local threat intelligence data to see whether you have exposure to this attack, and if so, write YARA rules based on the discovered samples to find every variant of the malware and isolate and contain it.
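A minimal exposure check might look like the sketch below, again assuming the hypothetical JSON-lines index from the earlier sketches and a threat feed reduced to a set of SHA-256 hashes; the feed hash shown is a placeholder.

```python
# An exposure check against the hypothetical JSON-lines index: report
# every locally seen file whose hash appears in a threat feed.
import json
from datetime import datetime, timezone

def check_exposure(index_path, feed_hashes):
    with open(index_path) as index:
        for line in index:
            record = json.loads(line)
            if record.get("sha256") in feed_hashes:
                seen = datetime.fromtimestamp(record["first_seen"], tz=timezone.utc)
                print(f"EXPOSED: {record['path']} first seen {seen:%Y-%m-%d}")

# Placeholder feed hash (SHA-256 of an empty file), for illustration only.
feed = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}
check_exposure("local_intel.jsonl", feed)
```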

A successful threat intelligence program provides critical information on events and objects touching your organization. Being aware of what happens elsewhere is helpful, but nowhere near as relevant as what takes place inside your own environment. Local file intelligence, created by automated static analysis, stored in a data lake and a file lake, and indexed so it can be searched and used for threat hunting, makes global intelligence feeds truly actionable. It also creates visibility into file-level threats that no other system provides, and it acts as a force multiplier for every other system in your security infrastructure.

ReversingLabs specializes in the development and deployment of large-scale, high-volume file analysis and threat hunting systems. We use these same tools to build and curate our industry-leading file intelligence service, TitaniumCloud.

Here is more information about our automated static analysis engine, our enterprise-scale, high-volume analysis and classification product, our malware hunting and analysis product, and our file intelligence service.

Hope to see you at Black Hat in Las Vegas. We will be there at Booth 1613.
