Dear fellow security professionals,
We have now released the first two elements of the OpenTIDE (Open Threat Informed Detection Engineering) project as open source under the EUPL (European Union Public Licence).
What we’re releasing today are the following two elements of OpenTIDE: the repo StartTIDE, which includes everything you need to get started using the framework, and the repo CoreTIDE, which contains the technologies, automations and schemas we’ve built to help your SOC teams work together towards sufficient detection coverage using an as-code approach - one where automation steps in and lifts much of the burden of tasks that analysts have had to do manually until now.
OpenTIDE
A Detection Engineering Hub
OpenTIDE is an end-to-end as-code framework meant to enable a Detection Engineering team to scale and improve the quality of two processes: turning unstructured sets of intelligence into actionable knowledge (driving the DE process and supporting Incident Response with a knowledge graph of high-quality, contextualized information for triaging, hunting, pivoting and investigating), and developing detections.
OpenTIDE is a:
- Threat-Driven, top-down philosophy to guide how detections should be built
- Framework to perform modelling and detection as-code, bringing powerful DevOps concepts to Detection Engineers
- Automation Engine to validate, deploy, document objects and build a comprehensive data schema
- Shareable data standard that helps DE teams across the community collaborate
Introducing DetectionOps
OpenTIDE introduces the DE world to powerful new DevOps processes, effectively creating a new DetectionOps methodology. It enables an organization to create, in an as-code implementation, a knowledge graph of threat actors, atomic threat actor TTPs (called threat vector models), detection objectives and implemented detection rules. The threat vector model concept helps make intelligence actionable for detection engineers by breaking it down to the right level of granularity (often the procedural level, in some cases above or below that).
Detection Objectives
Detection objectives help formulate detection ideas that can then be validated through hunting or emulation processes and turned into detection rules. OpenTIDE also offers, for the first time, a common methodology and data model for detection engineering, allowing an organization to share and work on threats and detections collaboratively. OpenTIDE works out of the box if you use GitLab and ships with all the CI configurations needed to create pipelines. For other version control systems, you will need to create your own pipeline definitions, while reusing the existing automation and overall framework.
Visual Studio Code is configured into a rich IDE to help you write models more effectively
Who should use OpenTIDE?
OpenTIDE pivots around the detection engineering (DE) practice. If your DE team is using, or is interested in, detection-as-code and eliminating manual processes as much as possible, then OpenTIDE is the place to start.
The framework can be adopted progressively: first adopt either the modelling practice or the detection-as-code practice before moving to the rest of the processes, should that be more practical.
Once a DE team has installed the framework and started to learn about it, it makes sense to involve your other SOC teams, such as CTI, which can often be natural owners of, or contributors to, the TVM models and may also contribute to the detection objective models (CDMs). Additionally, it makes sense to involve DFIR and red teams, as these can naturally feed intelligence into the detection backlog and help prioritize it (together with CTI).
What should you do with OpenTIDE to try it out?
The first commits
Clone StartTIDE into your GitLab repository (or GitHub, or any VCS with CI/CD automation, though you’ll then need to write your own workflow pipelines using the CoreTIDE Orchestration scripts, which automate the rest).
The .gitlab-ci.yml file details the few CI variables you need to configure in your project settings (mainly SSH keys supplied as GitLab CI variables).
By default, CoreTIDE is automatically fetched from our public repo and injected into your pipelines, but you may also clone CoreTIDE locally, uncomment a few lines in .gitlab-ci.yml and add the location of the local repo.
CoreTIDE ships everything required; the StartTIDE base pipeline injects CoreTIDE, which then takes over.
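As a rough sketch of the injection idea, GitLab’s standard `include:` mechanism is one way a base pipeline can pull in CI definitions from another repo. The project path, ref and file name below are illustrative placeholders, not the actual OpenTIDE coordinates - the real StartTIDE .gitlab-ci.yml documents the exact values to use:

```yaml
# .gitlab-ci.yml — illustrative sketch only, not the shipped StartTIDE file.
include:
  # Pull pipeline definitions from a public upstream project
  # (project path, ref and file are hypothetical placeholders).
  - project: "example-group/core-tide"
    ref: main
    file: "pipelines/coretide.gitlab-ci.yml"

# CI variables such as SSH keys are not written here; they are supplied
# as masked CI/CD variables in the project settings, as the StartTIDE
# configuration describes.
```

Keeping the framework logic in an included upstream file is what lets CoreTIDE updates flow into your pipelines without you re-vendoring anything.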
- Clone and open the repository using VSCode (StartTIDE contains all the workspace configurations needed, you just need to download the recommended extension).
- Find some intelligence that you find relevant.
- Create a new branch, then create a threat vector model (TVM) in VSCode in YAML (using the CoreTIDE schemas to autocomplete and validate inputs in VSCode).
- Sync it, review it and merge it into your main branch.
- Now, create a detection objective (CDM model) and link it to the TVM model. Sync it and merge it to main.
- Create a detection rule model (MDR) and link it to the CDM. Sync it and merge it to main.
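To make the TVM → CDM chain concrete, here is a minimal sketch of what two linked models could look like. All field names and identifiers below are illustrative assumptions, not the actual CoreTIDE schema - in practice VSCode autocompletes and validates the real fields for you:

```yaml
# tvm-0001.yaml — hypothetical threat vector model.
# Field names are invented for illustration; the real schema ships with CoreTIDE.
id: TVM0001
name: Scheduled task creation for persistence
description: >
  Adversary creates a scheduled task to persist across reboots.
references:
  att&ck: [T1053.005]   # ATT&CK technique this vector maps to
```

```yaml
# cdm-0001.yaml — hypothetical detection objective linked to the TVM above.
id: CDM0001
name: Detect suspicious scheduled task creation
parent: TVM0001         # the link that grows the knowledge graph
```

The point of the exercise is the `parent` link: every merged model attaches to something already in the graph, which is what the wiki and documentation engine later exploit.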
Now that you’ve got the basics down, you’ve built the start of a knowledge graph. Go see your GitLab project wiki, automatically built and populated by the CI/CD pipeline. The home page contains a fully indexed, searchable and metadata-rich knowledge base (containing only the entries you’ve added).
Deploy with Detection-as-Code
Set up your as-code detection deployment pipeline by enabling the system under the Configurations/system folder, then try to deploy a detection to one of the currently supported platforms, if you use one. Or contribute a new plugin for a detection platform we have not yet built support for (reach out so we can help you get started).
The configuration file also shows you which CI variables to set up (secrets, API endpoints, etc.). The CoreTIDE deployment pipeline deploys staging MDRs in a merge request and production MDRs after their merge to main, automatically promoting the statuses. MDR statuses allow automatic configuration switches (for example in Splunk, not raising notable events in STAGING but doing so in ACCEPTANCE).
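As an illustration of that status idea - again with invented field names, not the real MDR schema - a rule’s behaviour on the target platform can be keyed off its lifecycle status:

```yaml
# mdr-0001.yaml — hypothetical managed detection rule fragment.
id: MDR0001
name: Scheduled task persistence
parent: CDM0001          # links the rule back to its detection objective
status: STAGING          # promoted towards PRODUCTION as it merges to main
platforms:
  splunk:
    query: |
      index=wineventlog EventCode=4698
    # Example status switch: while in STAGING, deploy the search
    # without raising notable events.
    notable: false
```

The deployment pipeline can then read `status` and flip platform-side settings like `notable` automatically, instead of an analyst editing the rule in two places.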
If you wish to scale up, we recommend a first migration pass: port your existing detection rules into OpenTIDE as MDRs and start managing content from a single source of truth. If you also adopt threat and detection modelling, you will, as the knowledge graph grows, see increasing opportunities to connect existing content to relevant threats and analyze your detection gaps more efficiently. After you’ve taken a look around, feel free to engage with us on code.europa.eu or reach out via email or social media. Help us help defenders to think in graphs, faster.
Attack chain automatically built by the documentation engine, reconciling any links from any TVM to any other
Further adoption
We’d propose that you play with the framework until you’re comfortable with it, and then think hard about whether or not a tool like this fits into your strategic planning - now or later. You need someone who knows basic DevOps concepts or is willing to learn them. What’s the impact of DE automation on a DE team?
The automation frees DE engineers and analysts from updating documentation so they can spend their time thinking about great detections. On the pure DE side, there is of course a learning curve in adopting GitOps as a DE team, but after a relatively short period (perhaps two weeks of active use), the value to your DE team should become clear. Your detections are now documented and linked in the knowledge graph to everything they need to be linked to.
If you can integrate the DE process with your other SOC teams, then the value of those teams to the business should increase, as their output (RT exercises, DFIR lessons learned, CTI intel) becomes much more actionable to your DE team, with shorter turnaround times. With GitOps, you’ll also be able to go back to these teams and show them your actual improvements based on their inputs. If you can make this work, then maybe you can also look at the collaborative angle - contributing open-source TLP:CLEAR models or modules to OpenTIDE.
Once sharing becomes a reality, pull shared models into your own repo. Collaborate with other entities, each analyzing new threat actor data and building models together: ‘you take this new Fortinet vuln, we take the new Ivanti one, and we aim to share models within 24 hours?’. Or think of your own ways to benefit from a normalized collaborative knowledge graph.
Upcoming
- Coming soon: a white paper with essential information, analysis of previous research, architectural considerations and decisions, and more
- Coming soon: ShareTIDE – a repo that will contain the TLP:CLEAR models that exist so far, powered by a new automated sharing module (based on the TLP metadata present on every model). It will initially be sourced only from the EC team, but hopefully others will want to contribute TLP:CLEAR models. This is where we can help the most defenders, rather than in closed knowledge-sharing communities (which obviously also have their purpose and justification)
- A way for the community to contribute to the project/repos and hopefully a user group/community management platform
- New deployment engines
- New model types to connect offensive software and its submodules to threat actors, and to the TVMs that execute the submodules’ capabilities
- Further evolution and releases of the framework elements
- Helpful how-to’s and further documentation.