Automating Customer Demos with Tines SOAR

October 29, 2018 in Blog

10 minutes of awkward introductions. 30 minutes of stock slides. A 15-minute vanilla product demo (the only part you’re actually interested in). And finally, 5 minutes of rushed questions before your hard stop at the top of the hour. Sound familiar? In our experience, this is how a vendor demo typically goes. We want demos of the Tines security automation platform to be different. In this post, we explore how we use our SOAR platform to automate customer demo preparation, ensuring we provide as valuable an experience as possible.

Tines security automation platform - demo preparation

Screenshot of Tines Security Automation platform showing customer demo automation story

Automating customer demo scheduling

(Left hand flow in story diagram)
In Automating Trial Creation, we described how we use Tines to automate interaction with non-traditional security tools like DigitalOcean and SendGrid. Another great example of the flexibility that the Agent-event integration architecture provides is how we use Calendly. When a customer books a Tines demo, a Webhook agent receives the details from Calendly.

Depending on the event type from Calendly, Tines creates or removes an appointment on a shared calendar where we track customer demos. In addition, Tines updates our CRM, Hubspot, with the customer’s details.

Using the Tines Security Automation platform to customise demos

(Centre flow in story diagram)

Tines supports an unlimited number of automation use-cases. To ensure our demos are focused on a use-case that matters to the customer, we let them choose an automation story which we then walk through during their demo.

48 hours before their demo is scheduled, the customer will receive an email similar to the below.

Tines demo email sent to customer

By including a link to the Tines employee’s LinkedIn profile and a slide deck describing Tines, its core features and differentiation points, nine times out of ten we can skip the long intros and jump straight into the most valuable part: the product demo.

This email also allows the customer to “choose their own adventure”. Each of the story links uses a Tines Security Automation Platform prompt widget which will emit an event in our HQ Tines tenant when clicked. Although there are other ways we could collect this feedback, by using the prompt widget, we try our best to ensure the customer has had some exposure to the Tines Security Automation Platform before the demo even starts!


Automating customer context collection with the Tines SOAR platform

(Right hand flow in story diagram)
Allowing the customer to choose the subject of their demo is a great start; however, there are many more pieces of information that would help us tailor the demo with more granularity. For example: how mature is the customer’s security program, what tools do they use, and what are their current priorities?

Without asking the customer, there’s no way to know the answers to these questions. However, we can automate passive collection of open source intelligence that will at least help us develop a clearer understanding of the customer’s security program.

For example, in advance of a customer demo, we automate collection of the following with Tines HTTP Request Agents:

  • Basic information about the company (size, location, revenue)
  • Best guess at the demo requester’s LinkedIn profile
  • Previous Tines trials for both the demo requester and others at the company
  • Whether the company submits attributable suspicious URLs to public sandboxes
  • Current security-related job openings (these will often provide insight into tools the company uses)
  • Whether the company has DMARC enabled on their domain
  • Number of customer employees that have been included in known data breaches
  • The company’s email provider
  • Where the company’s website is hosted
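As an illustration of one of these checks, the DMARC lookup boils down to fetching the TXT record published at `_dmarc.<domain>` (with any DNS client) and parsing it. A minimal sketch of the parsing step — the logic below is ours, not a Tines agent:

```python
# Given the TXT record at _dmarc.<domain>, decide whether DMARC is
# enabled and which policy applies.
def parse_dmarc(txt_record):
    if not txt_record or not txt_record.strip().startswith("v=DMARC1"):
        return None  # no DMARC record published
    tags = dict(
        part.strip().split("=", 1)
        for part in txt_record.split(";")
        if "=" in part
    )
    return tags.get("p")  # e.g. "none", "quarantine", "reject"

parse_dmarc("v=DMARC1; p=reject; rua=mailto:dmarc@example.com")
# -> "reject"
```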

The end result is that 24 hours before the customer demo, the Tines employee scheduled to provide the demo will receive an email similar to that shown below:

Sample of Tines email sent to employee in advance of demo


The impact of a vendor demo is still largely reliant on the person providing it. Their ability to engage the customer, understand their requirements, and answer their questions clearly and effectively is crucial. However, by automating large parts of the demo preparation up front, we avoid many of the pitfalls all too common in vendor demos.

The kind of automation described in this post isn’t going to change the world, but the efficiency and improved customer experience it provides add up. With the Tines SOAR platform there is virtually no limit to the automation possibilities available. The agent-event integration architecture provides a consistent integration experience regardless of the target system. This is increasingly important as security teams automate interaction with non-traditional security tools.

To schedule your own Tines demo, click “Book a demo” from the main menu.

Developing a security automation proposal for fun and profit

October 24, 2018 in Blog

Information security analysts and engineers often feel the most direct benefits when a company deploys a security orchestration, automation and response (SOAR) platform. There’s a reduction in repetitive work, fewer false positives to chase down, and a decrease in the volume of alerts requiring investigation. However, if you work in a security operations team, convincing your management team to undertake a SOAR trial isn’t always straightforward, especially if you’re more used to PCAPs than value props.

In this post we share a methodology security operations center analysts and engineers can use to help them develop a compelling SOAR proposal. We also share a deck based on the methodology, which you can use to develop your pitch.

In Tines, we’ve successfully used this methodology when working with customers and prospects on their own security automation proposals.

Downloading and using the Security Automation Pitch Deck

The template pitch deck is available in Google Slides here. You can export it to PowerPoint by clicking File -> Download As. How you use the deck will depend on you, your company culture, your leadership, etc. The slides provide a foundation you can build on when making your pitch.

Developing the SOAR proposal

Start with the problem statement

First, we define the problem that our security automation and orchestration initiative will solve. When pitching a multi-purpose platform like Tines, it’s tempting to compile an exhaustive list of problems the platform will solve. In our experience, this actually dilutes the impact of your proposal. Instead, focus on a single problem you and your team feel today. Remember, your goal is to convince management to undertake a SOAR trial.

Problem statements and goals should always be SMART (specific, measurable, achievable, relevant, and time-bound). For example, the following are actual problem statements we’ve seen companies address with Tines:

  • How can we ensure 100% of new incident response analysts are on-boarded with the correct access and tool permissions by next quarter?
  • What mechanisms can we deploy to ensure every suspicious email reported by employees is analysed for malicious content, in multiple sandboxes, by the end of this year?
  • Can we increase coverage of detection and response metrics by 50% in the next 12 months?
  • How can we ensure that any potential incidents reported by the executive leadership team are actioned within a strict SLA, by end of next quarter?
  • How do we increase utilisation of our existing threat intel investments by 25% in the next three months?
  • Can we increase our SOC analysts’ engagement by 50% in the next six months?
  • What steps should we take to ensure all standardized workflow steps in SOPs are being followed in 80% of cases within the next 3 months?
  • How do we ensure all analysts have at least 5 hours per week to spend threat hunting new data sources/threat intelligence feeds by this time next year?

Pick the problem which you feel most acutely in your organisation. In the deck we’ve chosen the following:

How do we reduce the volume of incidents requiring analyst investigation by 50% in the next 6 months?

Structure the problem

Although there are probably a few different ways to tackle that problem, let’s suppose you believe that the best way is by automating incident investigation and response using a security automation platform. Your hypothesis, therefore, is:

We can reduce the volume of incidents requiring analyst investigation by 50% in the next 6 months by implementing a SOAR platform to automate repetitive analyst workloads.

To test this hypothesis, we’ll use a hypothesis tree. Here we’re defining all the statements which would have to hold true in order for our hypothesis to be valid. We also define how we test those statements.

Security Automation Platform Return on Investment

During analysis, if we prove even a single statement false, we prove the entire hypothesis to be incorrect. For example, if after performing the analysis (described in the next step) of historical incidents it’s revealed that only a tiny percentage are false positives, then a SOAR platform is not the solution to this problem. Of course, that’s not to say that a security automation platform isn’t the solution to another problem you’re experiencing.

Conduct analysis

Now that we have our completed hypothesis tree, we need to perform analysis to validate the supporting statements. For each statement, write down the corresponding question(s) that you need to answer in order to prove that statement true or false. Next, define the type of analysis you need to perform in order to answer that question. The analysis can be quantitative (counting repetitive steps in security processes) or qualitative (speaking to analysts). Finally, what data do you need to support the analysis, and where will you find it?

Security Automation Platform RoI Analysis

During the analysis step, it’s crucial that we avoid confirmation bias. A biased analysis brings your credibility into question and your pitch will be a lot less impactful.

Synthesise information and insights into action and buy-in

After answering all the questions through analysis, you should now have a compelling data set. As a next step, we’ll synthesise that data into insights that encourage action and buy-in. Related data points are grouped and rolled up into insights which address the governing thought or problem statement.

Security Automation Platform efficiencies

Communicate your SOAR-focused solution

When you’ve completed your analysis and established insights that relate directly to the problem you’re aiming to solve, the final step is to package your proposal into a recommended solution. In this case, a trial of a security automation platform.

Security automation platform proposal


By applying the methodology described in this post, you’ll produce a much more convincing Security Orchestration, Automation and Response (SOAR) trial proposal. Starting with a real problem felt by your security team and rigorously testing your proposed solution will ultimately demonstrate the value-add a SOAR initiative could produce.

If you’d like to discuss crafting a SOAR pitch for your organisation, we’d love to help. Contact us here.

G Suite Alert Center

October 19, 2018 in Blog

In the last few days, Google began rolling out the G Suite Alert Center to all G Suite customers. It provides extensive visibility into threats detected in G Suite tenants. In this post, we explore how G Suite administrators and security teams can leverage security orchestration, automation and response (SOAR) platforms, like Tines, to centralise, triage and respond to alerts from the G Suite Alert Center.

What is the G Suite Alert Center?

When Google announced the launch of G Suite Alert Center it was with the stated aim of providing “a single, comprehensive view of essential notifications, alerts, and actions across G Suite”. Admins can manage alerts more efficiently through the unified view that the alert center provides. Additionally, it provides insights that help them assess their organization’s exposure to internal and external security issues at the domain and user levels.

Out of the box, the G Suite Alert Center includes a range of alert types.

Accessing G Suite Alert Center

When logged into the G Suite Admin portal, click “Security” and then choose “Alert Center”.

Accessing the G Suite Alert Center

On the next page, the G Suite Alert Center displays all the alerts for your G Suite tenant. From this view, G Suite admins can view details of the alerts. Additionally, G Suite customers on the Enterprise plan can perform remediation actions from within the G Suite Alert Center itself. Later in this post, we’ll show how Tines can be used to automate the same remediation actions in real time regardless of your G Suite plan.

G Suite Alert Center Screenshot

Tines and the G Suite Alert Center

The alerts produced by the G Suite Alert Center provide valuable insight into potential security issues in G Suite tenants. However, the alerts are at their most valuable when, rather than being treated in isolation, we include them as part of a larger threat detection and response effort. By automating interaction with the G Suite Alert Center through Tines, we can use the alerts as both a threat source and as source of additional context when investigating other incident types. By using Tines to integrate with the G Suite Alert Center, we’re also centralising our response and aligning it to existing security response processes.

Connecting Tines to the G Suite Alert Center

In a previous blog post, we described the steps required to connect Tines to G Suite; we’ll use a similar method to connect to the Alert Center.

Enabling the G Suite Alert Center API

Follow these steps to set up the Alert Center API:

  1. Create a service account that can be used by your G Suite application (see instructions on creating a service account).
  2. Download the key file; it will be a JSON file containing your private key and other sensitive information.
     G Suite Service Account Key File
  3. Enable the Alert Center API (for instructions, see the section on enabling and disabling APIs).
     Enable G Suite Alert Center API
  4. Grant domain-wide access to the application, and therefore domain-wide delegation of authority to the associated service account (note that the 3-legged OAuth won’t work for the Alert Center API):
    1. Go to your G Suite domain’s Admin console (see instructions on signing in to your Admin console).
    2. Click Security.
      If you don’t see Security listed, select More controls at the bottom of the page, then click Security.
    3. Click Advanced settings.
    4. From the Authentication section, click Manage API client access.
    5. In the Client Name field, type the Client ID for the service account. This can be taken from the key file; in our example, it is a long number starting with 116.
       G Suite Admin API Access Pixelated
    6. In the One or More API Scopes field, enter the list of scopes that your application should be granted access to. In this case, type the following value:
    7. Click Authorize.

Creating a Tines Credential

Before Tines can connect to the G Suite Alert Center API, we need to configure a JWT credential type. For detailed instructions on how to use JWTs with G Suite, see here.

Use the information from your service account key file to fill in the required JWT fields. When you’re finished, the credential page will resemble the below.

G Suite Alert Center Tines Credential

After saving the credential, we can begin automating interaction with the G Suite Alert Center.

G Suite Alert Center Automation Story

Get a G Suite Auth Token

To begin orchestrating and automating activities in the G Suite Alert Center, we first need to retrieve an auth token. This will allow us to interact with the Alert Center API. We’ll do this with an HTTP Request agent, configured as shown below:
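For reference, the token exchange the agent performs is Google’s standard OAuth 2.0 JWT-bearer flow: a form POST whose assertion is the JWT signed with the service account key. A sketch of the request (the signed JWT is a placeholder supplied by the Tines JWT credential):

```python
# Build Google's OAuth 2.0 JWT-bearer token request; sending it is left
# commented out so the sketch runs without real credentials.
def build_token_request(signed_jwt):
    url = "https://oauth2.googleapis.com/token"
    payload = {
        "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
        "assertion": signed_jwt,
    }
    return url, payload

url, payload = build_token_request("<signed-jwt-from-credential>")
# import requests
# token = requests.post(url, data=payload).json()["access_token"]
```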

When this agent runs, it will emit an event containing a bearer token which we will use in subsequent agents.

Get All Alerts from G Suite Alert Center

We’ll use the Alert Center API’s list call in an HTTP Request Agent to get all alerts associated with our G Suite tenant.
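The list call itself is a GET against the Alert Center endpoint, authorised with the bearer token from the previous step. Sketched in Python (v1beta1 was the current API version at the time of writing):

```python
# Build the "list alerts" request; the bearer token comes from the
# previous agent's emitted event.
def build_list_alerts_request(bearer_token):
    url = "https://alertcenter.googleapis.com/v1beta1/alerts"
    headers = {"Authorization": "Bearer " + bearer_token}
    return url, headers

url, headers = build_list_alerts_request("<token-from-previous-event>")
# import requests; alerts = requests.get(url, headers=headers).json()
```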

When this agent runs and there are alerts in our tenant, the G Suite Alert Center API will return an array of alerts.

Get alerts in last five minutes

Additionally, we can use filters to find alerts that match certain criteria. For example, the below HTTP Request Agent uses the date liquid filter to find alerts created in the last 5 minutes (current time in seconds – 300 seconds, converted into Google’s preferred Timestamp format: RFC 3339).
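The same arithmetic the Liquid filter performs can be sketched as:

```python
from datetime import datetime, timedelta, timezone

# Compute "five minutes ago" as an RFC 3339 timestamp, mirroring the
# Liquid expression (current epoch seconds minus 300).
def five_minutes_ago_rfc3339(now=None):
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(seconds=300)
    return cutoff.strftime("%Y-%m-%dT%H:%M:%SZ")

# Used in a createTime >= "<timestamp>" filter on the alerts list call.
```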

Get alert details from the G Suite Alert Center API

We can also use an HTTP Request Agent, configured as below, to find the details of a specific alert using its ID.

A sample automation story

In the above automation story, we’ve created a blueprint for getting started with automatic handling of G Suite Alert Center threats. To begin, at five-minute intervals, we fetch all alerts. Next, an Event Transformation Agent is used to explode the array of alerts so each alert can be treated individually. Then we use several Trigger agents to emit events based on the alert type. Finally, we create an incident ticket based on the alert’s priority (this could be in Jira or another case management system) and add the alert’s details to the ticket.

From here, it would be trivial to add additional threat intelligence sources or to automate data-gathering log searches in a SIEM. We could also automate remediation activity, like blocking malicious senders, quarantining devices, and resetting compromised accounts, all without requiring human intervention.


When considering threat detection tools for their technology stack, it’s easy for security operation teams and security operation centers to overlook assets like the G Suite Alert Center. However, as enterprises continue their move to the cloud, these non-traditional sources of threat intelligence and security alerts are becoming increasingly valuable.

By using security automation and orchestration tools like Tines to respond to threats surfaced by data sources similar to the G Suite Alert Center, an enterprise incident response team can ensure their standardized workflow is followed. Additionally, their detection and response is smarter, quicker and less prone to human-error.

To talk to a Tines rep about security automation and the G Suite Alert Center, contact us here.

Automating Tines trial creation

October 15, 2018 in Blog

No prizes for guessing that at Tines we rely heavily on automation to power our DevSecOps program. The Tines platform is not only at the heart of our internal security, IT and CI/CD programs, we also use it to manage customer trials. This includes automated droplet creation and destruction in Digital Ocean, automating Hubspot actions for contact and lead tracking, and even automated creation of DNS records.

In this post we describe some of the DevSecOps design decisions we’ve made and why Tines is a great platform if you need to automate your own complex processes.


Automation is a crucial component of DevSecOps. As you can see from the above diagram, we automate interaction with the following services to support Tines trials:

  1. Digital Ocean: used to host trial infrastructure including DNS.
  2. Hubspot: tracks trial creation and allows us to manage the trial life cycle.
  3. Sendgrid: sends a welcome email to the trial user and updates Tines support if something goes wrong.

Automating Digital Ocean Droplet Creation

All customer tenants, including Tines trials, are single-tenant and have their own dedicated infrastructure. We use Digital Ocean as our primary infrastructure provider and interact with its API using HTTP Request agents.

To speed-up the provisioning of trial tenants, Tines maintains a pool of pre-configured droplets labelled “trial-pending”. When a trial is requested (middle flow in the above diagram), Tines receives the request via a Webhook agent. After deduplicating the request, Tines takes a trial-pending droplet from the pool and begins to configure it based on the requesting user’s details (we use a simple deployment script for this). When no trial-pending droplets are available, an Event Transformation agent delays the flow while a droplet is built.

As part of the configuration process, Tines removes the “trial-pending” label and applies a “trial” label. Tines receives the deployment result via another Webhook agent (left hand side of the diagram), once the deployment is complete. Additionally, a new “trial-pending” droplet is created and added to the pool.

The deployment script sends the details to Tines via a Webhook agent (right hand side of the diagram) when it has completed. Tines uses the Digital Ocean API again to apply a label and create a new DNS entry for the droplet. Finally, Tines powers off the droplet while we wait for a new trial request.
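The pool mechanics described across these steps can be sketched as follows (the droplet store and deploy step are illustrative stand-ins, not Tines internals):

```python
# Illustrative sketch of the trial-droplet pool flow described above.
def provision_trial(pool, deploy):
    if pool:                      # a "trial-pending" droplet is ready
        droplet = pool.pop()
    else:                         # otherwise build one (Tines delays the flow here)
        droplet = {"label": "trial-pending"}
    droplet["label"] = "trial"    # relabel the droplet for the trial
    deploy(droplet)               # configure it for the requesting user
    pool.append({"label": "trial-pending"})  # top the pool back up
    return droplet

pool = [{"label": "trial-pending"}]
droplet = provision_trial(pool, lambda d: None)
# droplet is now labelled "trial"; the pool again holds one pending droplet
```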

Hubspot Automation

Once Tines has completed trial deployment, we use a HTTP Request Agent and the Hubspot API to check if the contact already exists. If it does, we associate the new trial to the contact. If the contact is new, we add their details to Hubspot and then associate them to the new trial.

SendGrid Automation

We use SendGrid to send transactional emails such as the welcome email we send when a trial is ready for the requester (sample below).

Tines welcome email automation devsecops

Additionally, if something goes wrong (for example, there are no trial-pending droplets available), Tines support are notified using an Email agent.


For a detailed walkthrough of this story, talk to us here.

This is an updated walkthrough of how we manage Tines trials as part of our DevSecOps program. The original version is available here.

VirusTotal API – Getting started with security automation

October 11, 2018 in Blog

In this post, we explore the VirusTotal API. We also look at how Tines and security automation can power-up your usage of the VirusTotal API.

Public vs. Private APIs

VirusTotal provides two API versions: a Public API and a Private API. The main differences between the two are the volume of queries allowed and the depth of information provided. The Public API allows four queries per minute and does not allow malware sample downloads. For the majority of uses, the Public API will be sufficient, and it’s what we’ll focus on in this post.

Creating a VirusTotal account

Access to the VirusTotal Public API is free; to get started, you’ll need to obtain an API key, which allows you to make queries against the API. Click “Join our community” on the home page, enter the required details, and click “Sign up”.
Virustotal api sign up

Getting a VirusTotal API key

After you’ve created your account, click your username in the top right-hand corner of the page. Then, from the drop-down menu, select “My API key”.

On the next page, VirusTotal will display your API key: a long, alphanumeric string. As with all API keys, you should treat it like a password; store it securely in a password manager and don’t embed it in scripts.

VirusTotal API examples

Now that we have an API key, we can start making queries against the Public API. The following examples use cURL; if you don’t have cURL installed, you may want to use our Postman collection. Additionally, in all the examples below, you should replace $your-api-key with the API key from your VirusTotal profile.

Further details, including sample requests in Python, are available from the VirusTotal Public API docs page.

Get a file scan report

If we have a suspicious file, we can check its status with the VirusTotal API. In the below example, replace $your-file-hash with the hash of the file you want to check.
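Sketched in Python rather than cURL (the endpoint and parameter names follow the v2 Public API; the key and hash are placeholders):

```python
# Build the v2 file/report request; the GET itself is left commented
# out so the sketch runs without a real API key.
def file_report(api_key, file_hash):
    url = "https://www.virustotal.com/vtapi/v2/file/report"
    params = {"apikey": api_key, "resource": file_hash}
    return url, params

url, params = file_report("$your-api-key", "$your-file-hash")
# import requests; report = requests.get(url, params=params).json()
```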

If the file exists in its database, VirusTotal will return the results in a response similar to the below:

Scan a file with VirusTotal API

The below request sends a file located at /your/file/path to VirusTotal for scanning.
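The upload is a multipart POST to the file/scan endpoint. A Python sketch of the same request (placeholder values, per the v2 docs):

```python
# Build the v2 file/scan upload; the file is sent as multipart form data.
def file_scan(api_key, file_path):
    url = "https://www.virustotal.com/vtapi/v2/file/scan"
    data = {"apikey": api_key}
    files = {"file": file_path}   # with requests: {"file": open(file_path, "rb")}
    return url, data, files

url, data, files = file_scan("$your-api-key", "/your/file/path")
# import requests; resp = requests.post(url, data=data, files=files).json()
```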

If the submission was successful, you will receive a response similar to the below. Notice that VirusTotal includes a field called “scan_id”. VirusTotal will not scan the file immediately; instead, it adds it to a queue. As such, the “scan_id” field allows us to check the status of the scan.

Example response:

Rescan an already submitted file

If a scan report is out-of-date, or you want the most recent status, you can rescan the file by submitting its hash.
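A rescan is a POST of the file’s hash to the file/rescan endpoint, sketched below (placeholder values):

```python
# Build the v2 file/rescan request; submit the file's hash, not the file.
def file_rescan(api_key, file_hash):
    url = "https://www.virustotal.com/vtapi/v2/file/rescan"
    data = {"apikey": api_key, "resource": file_hash}
    return url, data

url, data = file_rescan("$your-api-key", "$your-file-hash")
# import requests; resp = requests.post(url, data=data).json()
```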

Send and scan URL

VirusTotal provides the status of URLs in its database. To scan a URL, use a command similar to that shown below.
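The submission is a POST to the url/scan endpoint, sketched here in Python (placeholder values):

```python
# Build the v2 url/scan request; the URL to analyse goes in the form body.
def url_scan(api_key, suspect_url):
    url = "https://www.virustotal.com/vtapi/v2/url/scan"
    data = {"apikey": api_key, "url": suspect_url}
    return url, data

url, data = url_scan("$your-api-key", "http://example.com/suspicious")
# import requests; resp = requests.post(url, data=data).json()
```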

Retrieve URL scan report

Before submitting a URL for scanning, it’s best practice to check if the URL already exists in the database. Do this using a command similar to that shown below.
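The lookup is a call to url/report, with the URL passed as the resource parameter (placeholder values, per the v2 docs):

```python
# Build the v2 url/report lookup; "resource" is the URL to check.
def url_report(api_key, suspect_url):
    url = "https://www.virustotal.com/vtapi/v2/url/report"
    params = {"apikey": api_key, "resource": suspect_url}
    return url, params

url, params = url_report("$your-api-key", "http://example.com/suspicious")
# import requests; report = requests.get(url, params=params).json()
```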

Retrieve URL scan report (scan if does not exist)

In the above command, if the URL doesn’t already exist in VirusTotal’s database, the response will be blank. By specifying “scan=1” in our request, we can tell VirusTotal to automatically scan the URL if it doesn’t already exist.
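The only change from the plain report call is the extra scan parameter:

```python
# As the url/report lookup, but with scan=1: VirusTotal queues a scan
# when the URL isn't already in its database.
def url_report_or_scan(api_key, suspect_url):
    url = "https://www.virustotal.com/vtapi/v2/url/report"
    params = {"apikey": api_key, "resource": suspect_url, "scan": 1}
    return url, params
```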

Retrieve domain report

Most people are aware that VirusTotal contains the status of URLs and files across anti-virus engines. However, not everyone is aware that it also maintains an extensive database of passive DNS data. To get the status of a domain, including passive DNS information via the VirusTotal API, use a request similar to the below.
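The domain lookup is a GET with a domain parameter rather than resource (placeholder values):

```python
# Build the v2 domain/report lookup (the response includes passive DNS data).
def domain_report(api_key, domain):
    url = "https://www.virustotal.com/vtapi/v2/domain/report"
    params = {"apikey": api_key, "domain": domain}
    return url, params

url, params = domain_report("$your-api-key", "example.com")
# import requests; report = requests.get(url, params=params).json()
```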

Retrieve IP address report

To retrieve the status of an IP address, including passive DNS, submit a request similar to the below to the VirusTotal API.
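Similarly, for an IP address:

```python
# Build the v2 ip-address/report lookup (also includes passive DNS data).
def ip_report(api_key, ip):
    url = "https://www.virustotal.com/vtapi/v2/ip-address/report"
    params = {"apikey": api_key, "ip": ip}
    return url, params

url, params = ip_report("$your-api-key", "203.0.113.10")
# import requests; report = requests.get(url, params=params).json()
```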

Comment on file or URL

An extremely valuable, if slightly underused resource in VirusTotal, is the comments applied to files and URLs. Below, we’re indicating that the URL in question is a phishing page. As such, we’re providing valuable context to other VirusTotal users.
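The comment is a POST to comments/put, with the URL or file hash as the resource (placeholder values; the comment text is an example):

```python
# Build the v2 comments/put request; "resource" is a URL or file hash.
def put_comment(api_key, resource, comment):
    url = "https://www.virustotal.com/vtapi/v2/comments/put"
    data = {"apikey": api_key, "resource": resource, "comment": comment}
    return url, data

url, data = put_comment("$your-api-key", "http://example.com/login",
                        "#phishing page impersonating a bank login")
# import requests; resp = requests.post(url, data=data).json()
```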

VirusTotal API Postman Collection

When experimenting with APIs, an extremely useful tool is Postman. Postman provides a GUI for interacting with APIs and makes it easy to create, troubleshoot, and share API requests. At Tines, we’ve created a Postman collection with common VirusTotal API queries. You can download the collection, or clone the repository, from here and then import it into Postman.

Configuring Postman with the VirusTotal API collection

After you have imported the Postman collection, create a new environment variable called “apikey”. Next, store your VirusTotal API key in this variable (see below).

Postman with virustotal api key

The Postman collection contains several API calls that can be customised based on your requirements. For example, replace the URL you wish to scan, or the IP address for which you wish to perform a passive DNS lookup.

Using the VirusTotal API with Tines

In our automating phishing and abuse inbox management tutorial series, we used the VirusTotal API extensively to analyse suspicious URLs and files. So, you may want to start there to understand a real world security automation application of the VirusTotal API.

Adding your VirusTotal API key to a Tines credential

The Tines credential widget allows storage of secret information so it can be safely included in agent bodies without fear of disclosure. To add your VirusTotal API key to Tines, while signed in to a tenant, choose “Credentials” -> “New Credential”.

Tines supports a variety of credential types; for the VirusTotal API, choose “text”. Next, choose a name for the credential, then enter your API key under “Credential value”.

VirusTotal API key in Tines credential

After saving the credential, we can now include it in agent configurations with the credential widget: {% credential Virustotal %}.

Submitting queries to the VirusTotal API using Tines

The Tines HTTP Request Agent (HRA) is used to integrate with third-party APIs. Configuring an HRA to talk to the VirusTotal API is easy. First, create a new HRA: from anywhere in your tenant, choose “Agents -> New Agent” from the main menu, then choose “HTTP Request Agent” from the agent type drop-down.

After configuring the common config, edit the options block to reflect the call you want to make to VirusTotal. For example, if you want to comment on a file or URL you would use the following options block:


If you wanted to retrieve an IP address report, you would use an options block similar to that shown below:

In this case, we’re using the GET method and calling the /ip-address/report endpoint. When the agent runs, it will emit the response from the VirusTotal API as a new event, see sample emitted event below:

Sample emitted Tines event Virus Total API


Avoiding VirusTotal API rate limits

As mentioned above, the VirusTotal API rate-limits requests to four per minute. When you exceed this quota, VirusTotal will respond with an empty body and an HTTP status of 204.

Using Tines, we can account for event spikes and thus avoid exceeding our quota using Trigger Agents (TA) and an Event Transformation Agent (ETA) in delay mode.

In the diagram below, we use an HTTP Request Agent to query the VirusTotal API, then check whether the response hit a rate limit. If it did (i.e., the status was 204), we wait 20 seconds and try again.
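The same check-and-delay loop can be sketched in Python (`query` stands in for the HTTP Request Agent; the sleep is injectable so the sketch is testable):

```python
import time

# Call the API; on HTTP 204 (quota exceeded) wait 20 seconds and retry.
def query_with_backoff(query, sleep=time.sleep, max_attempts=5):
    for _ in range(max_attempts):
        status, body = query()
        if status != 204:         # not rate limited: pass the result on
            return body
        sleep(20)                 # rate limited: delay, then try again
    raise RuntimeError("still rate limited after retries")

# Simulated responses: one rate-limit, then a real result.
responses = iter([(204, ""), (200, {"positives": 0})])
result = query_with_backoff(lambda: next(responses), sleep=lambda s: None)
# result == {"positives": 0}
```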

Using Tines to avoid VirusTotal API rate limits

The configuration for these four agents is available in an importable story here.

Tutorial series: Automating abuse inbox management and phishing response (Part 3)

September 5, 2018 in Blog


In part two of our deep-dive series into end-to-end automation of abuse inbox management and phishing response, we added additional URL threat intelligence services and submitted suspicious attachments to multiple malware sandboxes. We collected the results of URL and email analysis and sent the user a prompt-enabled email.
In part three, we’ll expand our automation story by automating user activity confirmation, account takeover response, and SIEM searches.

Automating user activity confirmation

When a user clicks the prompt URL, Tines emits an event similar to the below. This event contains details of the prompt response and the event that originally triggered the prompt.
Tines prompt event
Prompt event
We’ll use a Trigger agent to detect this case. We’ll name the agent “User confirmed interaction with malicious email”; its options block will look like the below:

Here we’re configuring the trigger agent to emit an event when the prompt status is “clicked”, the value we set in the prompt widget in part 2 of the tutorial.
If this agent emits an event, it indicates that the victim interacted with the malicious email. As such, we’ll need to take remediating action. We could quarantine the user’s machine with an EDR tool or submit a reimage ticket with our helpdesk team. For now, logging out the user and locking the account will suffice.

Automating account takeover response

As the user interacted with the malicious email, we need to remediate a potential account takeover. We’re using OneLogin for identity and access management, so to expedite response we’ll use two HTTP Request agents to first log the user out, and then lock their account.
Included below is the options block for the log-out agent. Unsurprisingly, it is similar in form to the “Get victim details” agent. We take the ID of the user we want to log out from the incoming event, a sample of which is shown below:
Tines OneLogin Get User API Event
OneLogin Get User API Event

Once we’ve logged out the victim, we want to lock their account. With the account locked, we can safely perform additional investigation. As seen in the OneLogin API docs, we do this by setting the user’s “status” to 3 with another HTTP Request agent. The corresponding options block is below:
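For illustration only, the two requests might be built as below. The endpoint paths follow OneLogin’s v1 API conventions as we understand them and should be verified against the OneLogin API docs; status 3 is the locked state referenced above, and the region subdomain is a placeholder.

```python
# Hypothetical request builders for the two remediation agents.
BASE = "https://api.us.onelogin.com/api/1"   # region subdomain is a placeholder

def logout_request(user_id):
    # "Log a user out" call
    return "PUT", f"{BASE}/users/{user_id}/logout", None

def lock_request(user_id):
    # Update the user, setting status 3 ("locked")
    return "PUT", f"{BASE}/users/{user_id}", {"status": 3}

method, url, body = lock_request(12345)
```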

After re-emitting an event and letting it run down the story, we confirm the account is locked by viewing the victim’s profile.
Locked user in OneLogin GUI
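For illustration, here is a rough Python sketch of the two OneLogin calls these agents make. The regional API host, auth header format, and request body shape are assumptions to check against the OneLogin API docs; in the story itself, the HTTP Request agents carry this configuration in their options blocks.

```python
def build_logout_request(user_id, token):
    # Log the user out of their OneLogin sessions.
    return {
        "method": "PUT",
        "url": "https://api.us.onelogin.com/api/1/users/%d/logout" % user_id,
        "headers": {"Authorization": "bearer:%s" % token},
    }

def build_lock_request(user_id, token):
    # Per the OneLogin API docs referenced above, setting the user's
    # "status" to 3 locks the account.
    return {
        "method": "PUT",
        "url": "https://api.us.onelogin.com/api/1/users/%d" % user_id,
        "headers": {"Authorization": "bearer:%s" % token},
        "json": {"status": 3},
    }
```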
With the addition of these agents, this section of our automation story now looks as follows:
Account takeover automation diagram

Automating SIEM Searches (web proxy)

When an employee receives a malicious email, it’s quite likely that other employees will have been targeted in the same campaign. However, they may not have reported the email to the abuse inbox. Or worse, they may not have realised it was malicious and, as a result, actioned the email. As such, when security teams identify a malicious URL received by an employee, they will often search network logs (firewall, web proxy, etc.) to try to identify additional users who may have visited the malicious URL. We can automate this process with Tines.

Finding phishing domain

In our example, we’re collecting web proxy logs in Splunk:
Sample web proxy log from Splunk
Splunk indexes the domain visited by employees in a field called “cs_host”. By extracting the domain from the malicious URL and searching that field in Splunk, we can identify victims in our environment. We’ll use an Event Transformation agent in “Extract” mode to extract the domain from the malicious URL. The new agent will receive events from the existing “URL is malicious” agent; its options block is shown below:

TIP: When building complex regular expressions to use in an extract-mode Event Transformation agent, a regex testing service such as Rubular can be useful to help test and debug expressions.
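For example, a minimal sketch of the kind of extraction the agent performs. This simple pattern is illustrative only; production URL parsing has edge cases (ports, userinfo, IDNs) that a one-line regex won’t cover:

```python
import re

# Capture everything between the scheme and the first /, :, ?, or #.
DOMAIN_RE = re.compile(r"https?://([^/:?#\s]+)")

def extract_domain(url):
    """Extract the domain from a URL, or return None if no match."""
    match = DOMAIN_RE.search(url)
    return match.group(1) if match else None
```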

Automating Splunk searches

Next, we’ll use an HTTP Request agent and the Splunk REST API to find any records of users visiting the domain we extracted from the malicious URL. We’ll have this agent (named “Search SIEM for visits to malicious domain”) receive events from the “URL is malicious” agent.
See the agent options block below. Here, we’re using basic authentication with the credential widget, the POST method, and a form submission to send the search to Splunk. We include the extracted domain in the search string. We are particularly interested in the source IP of the visiting user (contained in s_ip) and the time of the visit, so we include these fields in our search request.

When we dry-run this agent, we can see that Splunk returns a search ID, “sid”, which we can use to find the results of the search.
Splunk DHCP Search via REST API
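The submission step can be sketched in Python. The index name and field names (cs_host, s_ip) come from this example environment; the payload would be POSTed with basic auth to Splunk’s /services/search/jobs endpoint, which responds with the “sid”:

```python
def build_search_payload(domain):
    """Build the form body the HTTP Request agent POSTs to Splunk.

    Adjust the index and field names to your own Splunk schema.
    """
    return {
        "search": 'search index=proxy cs_host="%s" | fields s_ip, _time' % domain,
        "output_mode": "json",  # ask Splunk for JSON-encoded responses
    }
```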

Getting search status using Splunk REST API

We’ll use another HTTP Request agent, this time called “Get search status”, to determine if the search has completed. As shown in the agent options block below, we’re submitting the sid via a GET request.

This agent returns the status of the search in a field called “dispatchState”. A value of “DONE” indicates that the search is complete; a value of “RUNNING” means the search is still in progress.
Event emitted after checking Splunk search status
As with the URLScan and Virustotal agents in previous parts of this tutorial series, we’ll use a pair of Trigger agents to:
  1. Check again if the search is still running. We’ll call this agent “Search still in progress”
  2. Emit an event if the search is complete. We’ll call this agent “Search complete”
See the configuration for both these agents below:
Configuration for search status trigger agents
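The polling behaviour these agents implement can be sketched as a simple loop. Here get_status stands in for a call to the “Get search status” agent’s endpoint, and the DONE/RUNNING check mirrors the two Trigger agents:

```python
import time

def wait_for_search(get_status, poll_interval=5, max_polls=60):
    """Poll a Splunk search job until its dispatchState is DONE.

    get_status is a callable returning the job status dict. Returns
    True when the search completes, False if it never finishes
    within max_polls attempts.
    """
    for _ in range(max_polls):
        if get_status().get("dispatchState") == "DONE":
            return True
        time.sleep(poll_interval)
    return False
```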

Getting search results using Splunk REST API

When the search is complete, we can use another HTTP Request agent to get the search results. The options block for this agent is shown below; it receives events from the “Search complete” Trigger agent.

As Splunk returns a different structure depending on whether there is a single result or multiple results, we’ll build two Trigger agents to account for both cases. Their respective options blocks are shown below:

Where there are multiple search results, we’ll configure an Event Transformation agent to explode the results so they can be treated individually. The options block for this agent, which we call “Explode search results”, is shown below:

Every time the malicious domain was visited from our environment, we now have access to the source IP address and time of the visit. After our updates, this section of our story looks like the below:
Automated SIEM searches diagram
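The single-result/multiple-result handling can be sketched as one normalisation function. The “results” key matches Splunk’s JSON output mode; the exact payload shape is an assumption to verify against your Splunk version:

```python
def explode_results(search_response):
    """Normalise Splunk search results to a list of individual events.

    Mirrors the pair of Trigger agents plus the "Explode search
    results" Event Transformation agent: a lone result object becomes
    a one-element list, a list passes through unchanged.
    """
    results = search_response.get("results", [])
    if isinstance(results, dict):
        # Single-result case: Splunk returns one object, not a list.
        return [results]
    return list(results)
```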

Automating SIEM searches (DHCP)

Unfortunately, the web proxy in our example does not log the asset name or the user that visited the malicious domain. The only information we have to determine this critical context is the source IP address and the time of the visit. Thankfully, we also store DHCP logs in Splunk, so we can automate collection of the host (asset) name with Tines.

DHCP logs in Splunk

As the response from Splunk is slightly different when there are single and multiple results (see below), we’ll use a pair of HTTP Request agents which will receive events from the “One search result” and “Explode search results” agents respectively. The options blocks for these agents are similar; the “Get host from DHCP logs single result” options block is shown below:

Breaking down Splunk API search

This agent is slightly more complex than our previous HTTP Request Agents, so let’s unpack the payload being submitted with a POST to Splunk:
  1. search: We want to find DHCP logs related to the identified source IP address, which we take from incoming events. As we want to find the machine which was assigned the IP, we only look at logs related to ‘New’ or ‘Renew’ leases. By specifying “head 1”, we take the most recent event within our specified time range, that is, the last asset assigned the IP address. Finally, as we want the asset name in question, we specify that field via “fields “Host Name””.
  2. output_mode: Here we tell Splunk to use the JSON encoding scheme.
  3. earliest_time: We want to know what machine received/renewed the IP address before the user visited the malicious domain. We do this with Splunk’s earliest_time parameter. We take the timestamp of the visit to the malicious domain and convert it into a UNIX epoch time with the date liquid filter: date: ‘%s’. Next, we use the minus liquid filter to subtract 12 hours (43200 seconds) from the time the user visited the malicious domain; this should be sufficient to find a relevant DHCP event.
  4. latest_time: We set the upper time limit of the search to the time the user visited the malicious site. In our case, the IP assignee does not matter after this point.
Difference between single and multiple Splunk event results
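The earliest_time/latest_time arithmetic described above can be sketched in Python. The ISO-format timestamp and UTC assumption are illustrative of the incoming event, not an exact Tines schema:

```python
from datetime import datetime, timezone

def dhcp_search_window(visit_time_iso, lookback_seconds=43200):
    """Compute the DHCP search time bounds from the proxy-log visit time.

    latest_time is the visit itself; earliest_time is 12 hours (43200
    seconds) earlier, matching the liquid filters date: '%s' and
    minus: 43200 described above.
    """
    visit = datetime.fromisoformat(visit_time_iso).replace(tzinfo=timezone.utc)
    epoch = int(visit.timestamp())
    return {"earliest_time": epoch - lookback_seconds, "latest_time": epoch}
```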

Getting status of DHCP search in Splunk

As with the web proxy search, we’ll use a combination of HTTP Request agents and Trigger agents to monitor the status of the search and fetch the results when it’s complete.
When the search successfully identifies the host, the event will look similar to the below:
Event emitted after matching host identified
Next, we’ll add a Trigger agent to detect when the search fails to return any results, telling us that no machine could be matched to the visit to the malicious domain, and a corresponding Trigger agent for when a result was returned. We could also create an Email agent to notify us when the DHCP search fails to get a result.

We’ve now collected the machine name. With this, we could isolate the machine from the network, take a forensic image, or perform additional log searches to collect other relevant context.

Automating asset management with Jira

We suspect that a user in our environment visited a malicious site. As such, we need to remediate the risk of their account being compromised. For this, we need to retrieve the user associated with the asset we identified through the proxy and DHCP logs.
In our example, we have a dedicated Asset Management project in JIRA which contains all asset information in our environment.
Asset Management Project in Jira
Asset management project in JIRA
We’ll use an HTTP Request agent to fetch the asset owner from Jira. We’ll call this agent “Get asset owner” and configure it to receive events from the “Found matching host” Trigger agent. This agent uses the JIRA Search API to find entries in the Asset Management JIRA project (AM) where the “Asset ID” field matches the host name value found in Splunk. We authenticate to JIRA using basic authentication.

When this agent runs, it will emit an event similar to the below. The field we’re most interested in is the assignee’s email address; this represents the owner of the asset and the account we need to remediate.
Event from JIRA REST API
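For illustration, the Jira lookup might be sketched like this. The base URL is a placeholder, and the custom “Asset ID” field name and AM project key come from this example environment rather than a general Jira schema:

```python
def build_asset_owner_request(host_name):
    """Build the JIRA Search API request the "Get asset owner" agent makes.

    The JQL "~" operator performs a text match against the Asset ID
    field; we only need the assignee (the asset owner) back.
    """
    jql = 'project = AM AND "Asset ID" ~ "%s"' % host_name
    return {
        "url": "https://example.atlassian.net/rest/api/2/search",
        "params": {"jql": jql, "fields": "assignee"},
    }
```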
We’ll reuse the agents we created previously to interact with OneLogin. You’ll recall that they fetch additional user information, log out the user, and lock their account.
Our final story diagram is below:
Automated phishing response diagram


Over the course of this series, we’ve significantly evolved our automation story: from a simple five-agent story that analysed emails to a comprehensive story containing over 50 agents. We’ve automated analysis of email links and attachments, determination of threat level, user activity confirmation, SIEM searches and more. However, in many ways, we could view this story as a jumping-off point for an even more capable story. We didn’t touch on case management, DMARC or additional threat hunting.
By automating this process in Tines, we have complete visibility into every step of the story. We also have unprecedented ability to expand, troubleshoot and scale.


Automated Information and Data Leaks

August 9, 2018 in Blog

Data leaks and information disclosure caused by employees is an issue with which security teams regularly contend. Committing credentials to Github is one of the more well-known ways this issue arises. Recently, posting sensitive data on public Trello boards has also made headlines. In this post, we explore a way security teams themselves often unintentionally expose sensitive company information.


Cyber security teams analyse URLs and files to determine if they represent a threat to their organisation. This requirement might arise while investigating a suspicious email sent to an executive staff member, or while reviewing web traffic from an infected endpoint. File and URL investigations can be time-consuming if performed manually. As such, creative security engineers have developed a number of solutions to automate and streamline this process in a safe way. We call these tools “sandboxes”.
Many of the most popular sandboxes (see six common examples below) are free and made publicly available to security teams and researchers. These sandboxes are incredibly useful resources and all security teams should be aware of them. However, like every tool, when misused they may actually cause data leaks in your company.
Although the exact mechanics of each sandbox varies, broadly they operate something like this:
  1. User (usually a security analyst) submits a suspicious file or URL to a sandbox.
  2. Sandbox analyses the behaviour of the submission (by opening the file or visiting the URL) and provides the user with analysis results allowing them to determine if the URL or file represents a threat.
  3. Sandbox stores and makes publicly searchable the results of the analysis so other companies may inform and protect themselves.

How security teams can cause data leaks

The problem of leaked data arises when a user submits to a sandbox a legitimate URL or file which leads to, or contains, sensitive information. By design, sandboxes record and make this sensitive information public. Additionally, as many public sandboxes provide APIs allowing programmatic submissions, the volume of sensitive information inadvertently “sandboxed” by security teams is increasing. For example, at Tines we regularly see security teams sandboxing every URL in every email that comes from an external source to an employee. This is fantastic from a threat detection perspective, but unless filtering and redaction occurs before sandbox submission, it’s almost certain that sensitive content will also be sandboxed.
To understand how widespread this subtle form of data leakage is, I spent a little time searching sandboxes for sensitive content. It’s important to point out that the services hosting the exposed content (Dropbox, Google Docs, etc.) are not at fault here. What happens to the URLs/emails after they are correctly sent to their intended recipient is largely out of their control. (The argument that some of this content should be behind additional authN/Z is outside the scope of this post.)

URLs containing email addresses

It’s not uncommon for URLs in emails to contain the recipient’s email address as a parameter. So, we started by looking at every URL that contained the string “email=”. Over a two-day period, we identified several hundred unique corporate email addresses.
Avoiding data leaks with automation

Password reset emails

Next, we searched for sandboxed URLs that contained strings which indicated the URL related to a password reset email. For example:
•  “resettoken”
•  “passwordreset”
•  “reset_password”
•  “new_password”
With a trivial amount of effort, we found around 50 still-valid password reset links, several of which were to well-known enterprise services. Additionally, we found password reset links for enterprise social media profiles. This is an interesting attack vector for opportunistic ATOs, but may be a little contrived for targeted attacks.
Avoiding data leaks with automation Screenshot showing compromised twitter account

File Sharing Services

A familiar use-case for file sharing services such as Dropbox, OneDrive, WeTransfer, etc. involves emailing a shared link to a file. A search for strings used in these links returned thousands of files with over-generous sharing settings, i.e.: “anyone with the link can access”. There were PPTs, docs and several other files containing what appeared to be sensitive company information.

Avoiding data leaks with automation Screenshot showing leaked sensitive company content

Electronic Signature Services

Services such as Adobe Sign, DocuSign, and DotLoop typically notify a user that they have a document awaiting signature. The notification email contains a link to a document, for example a sales contract or NDA. I searched several sandboxes for signature links and found hundreds of documents (both signed and awaiting signature).
Avoiding data leaks with automation Screenshot of leaked company contract Avoiding data leaks with automation Screenshot of leaked residential sale contract Avoiding data leaks with automation Screenshot of leaked purchase agreement


The increased availability of free and powerful URL scanners is a good thing. Sandboxes provide an accessible way for security teams, who are often resource-constrained, to quickly collect important context around suspicious URLs and files.

In addition, submissions to public sandboxes provide a forensic snapshot which allows security teams to investigate common attack patterns, and have even been known to provide valuable info on nation-state attacks. The purpose of this post is not to scaremonger or drive security teams to commercial, proprietary sandboxes, but rather to shine a light on risks that security teams leveraging these valuable resources may not be aware of.

How to Avoid Automated Data Leaks

•  Don’t sandbox URLs or files from senders/domains which you can confidently say will be legitimate.
•  Some sandboxes provide a “private” feature to reduce the risk of data leaks. This completes the scan but does not store the results for public consumption.
•  Before submitting to a sandbox, avoid data leaks by replacing sensitive information in URL parameters, such as email addresses, with benign placeholders.
•  If you are a service provider who delivers sensitive content over email, consider subscribing to feeds of recent scans from public sandboxes. When sensitive content which you delivered was sandboxed, notify the original recipient. In addition, remove access to the leaked content.
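As a sketch of the placeholder recommendation above, here is one way to redact sensitive query parameters before sandbox submission. The parameter list is illustrative; tune it to your own environment:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Illustrative set of parameter names worth redacting.
SENSITIVE_PARAMS = {"email", "resettoken", "token", "new_password"}

def redact_url(url, placeholder="REDACTED"):
    """Replace values of sensitive query parameters with a benign
    placeholder, leaving the rest of the URL intact."""
    parts = urlsplit(url)
    query = [(k, placeholder if k.lower() in SENSITIVE_PARAMS else v)
             for k, v in parse_qsl(parts.query, keep_blank_values=True)]
    return urlunsplit(parts._replace(query=urlencode(query)))
```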
For information on how Tines can be used to safely automate analysis of suspicious URLs in any sandbox, contact us here.

Tutorial series: Automating abuse inbox management and phishing response (Part 2)

July 27, 2018 in Blog

In part 1 of our Automating abuse inbox management and phishing response video series, we introduced the key concepts of Tines and built a basic story. In part two of the series, we go deep and add a lot of capability to our story, including:

  • Attachment analysis in VirusTotal
  • Real-time detonation of attachments in Hybrid Analysis
  • Analysis of URLs in
  • Collection of user responses with the Tines “Prompt Widget”

Shown below are the before and after diagrams:

Phishing Diagram Before and After

Download and import the Part 2 story file (right-click -> save as): phishing-response-abuse-inbox-management-part-2.


References:

Hybrid Analysis:

Hybrid Analysis API Docs:

Virustotal file submission:

Tines Docs – Working with files:

Tines Docs – Prompt widget:

Microsoft Graph Security Automation

July 22, 2018 in Blog

NOTE: This blog was created using an old version of the Microsoft Application Registration Portal. For new instructions, please follow our new guide here

If your organisation leverages Office 365, Microsoft Graph provides programmatic access to a wealth of data which can be used to better inform decision making during threat detection and response. In this post, we explore how to enable Tines for Microsoft Graph security automation, so that you can use information such as Outlook emails, organisational structure, advanced threat analytics and more in your security automation program.

Step 1 – Getting an app ID and secret for use in Microsoft Graph

Authenticating for Microsoft Graph security automation

We will authenticate to Microsoft Graph using an app ID and secret. To get these, we need to register a new application in the Microsoft Application Registration portal. Sign in with your Microsoft credentials.

From the “My applications” page, choose “Add an app”.

Enter an application name and press “Create”:

Microsoft Application Registration Portal - Name your app for Microsoft Graph Security Automation

From the application registration page, create a new application secret using the “Generate new password” button. Take note of the generated secret (you only see it once) and the application ID; we will need both when creating a Tines credential later.

Selecting the Microsoft Graph platform

Under “Platforms” choose “Add Platform”. Select “Web”.

Microsoft Application Registration Portal - What platform for Microsoft Graph Security Automation

Finding your Tines OAuth2 callback URL for Microsoft Graph secure access

We now need to specify where Microsoft Graph should send authentication responses for this application. Under “Redirect URLs”, enter the Tines Oauth2.0 callback URL for your Tines tenant. This takes the form https://<tenant-name> and is available from the “Create credential” page under the Oauth2.0 type in your tenant. Mine is shown below:

Creating a Tines OAuth2 credential for Microsoft Graph Security Automation

Finally, we need to define the permissions this application should have; these are also referred to as OAuth2.0 scopes. Permissions include everything from creating tasks to sending emails. A full list of permissions is available in the Microsoft Graph docs.

It is best security practice to provide the application with the minimum amount of permissions necessary to perform its required task(s).

In our example, we want to read Outlook emails using Tines, so we’ll include the permission under “Delegated permissions”. A sample configuration is shown below. Press ‘save’.

Microsoft Application Registration Portal - complete example of Microsoft Graph Security Automation

Step 2 – Creating a Tines credential

Next, we need to create a Tines credential which corresponds to the application we’ve just registered. We will use this credential in our agents to access Microsoft Graph security data. From your Tines tenant, choose “Credentials” and “New Credential”. From the “Type” dropdown, choose OAuth2.0. Give your credential a name; I used “Ms_graph”, but you can use whatever makes sense in your situation.

Under “client id” and “client secret” in the “Create credential” page, enter the “application id” and “application secret” from the application you just registered in Step 1.

Under scope, we’ll enter a space-separated list of the permissions we used when registering the Graph application. Additionally, we will include the offline_access scope; this scope will allow Tines to request fresh access tokens as necessary.

From the “Grant type” dropdown, choose “authorization_code”.

Under “Oauth url” and “Oauth token url”, we need to tell Tines where to request authorization and access tokens. For Microsoft Graph, these URLs take the following form:

Though there are a number of ways to determine your Office 365 tenant ID, the simplest is probably to use the service, which uses some sort of voodoo to retrieve the ID from your domain name. Plug your tenant ID into the above URLs.

Having entered all the required information into the “Create credential” page, it should look similar to the below:

Tines - example OAuth2 credential for Microsoft Graph Security Automation

When you select “Save credential”, Tines will redirect to a Microsoft account consent page, where you will be asked to authorize the application’s access to your account.

After accepting the request, Microsoft will securely redirect you to Tines.

Tines OAuth2 consent flow for Microsoft Graph Security Automation

Credential auth flow

Step 3 – Creating a Tines agent

We now have everything we need to connect Tines and Microsoft Graph. So, we’ll now use a standard Tines HTTP Request Agent to read emails from an Outlook account.

The Graph Explorer is a very useful tool for understanding how to interact with the data in Graph. Using the Graph Explorer, we can read Microsoft Graph security data. In addition, we can see that in order to read Outlook messages, we need to send a GET request to the following URL:

As such, we will create a HTTP Request with the following Options block:

When this agent runs, Tines will replace the credential widget ({% credential Ms_graph %}) with a valid access token. The event emitted by this agent will contain emails from my Outlook inbox. For example:

Tines - Event generated by Microsoft Graph Security Automation
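For comparison, the same request sketched in Python. The /me/messages endpoint is documented in the Microsoft Graph API reference; the access token stands in for the value Tines substitutes for the credential widget at runtime:

```python
def build_messages_request(access_token, top=10):
    """Build the GET request the HTTP Request agent sends to read
    Outlook messages via Microsoft Graph."""
    return {
        "method": "GET",
        "url": "https://graph.microsoft.com/v1.0/me/messages",
        "headers": {"Authorization": "Bearer %s" % access_token},
        "params": {"$top": top},  # limit how many messages come back
    }
```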


In conclusion, Microsoft Graph exposes an extraordinarily rich repository of data and capabilities. By using the Tines advanced security automation platform to automate interaction with Graph, security analysts can automate their Microsoft Graph security tasks and perform more thorough threat detection and response, all while freeing up analyst resources and allowing them to refocus on higher-impact activities.


Microsoft Graph quickstart guide:


Tutorial series: Automating abuse inbox management and phishing response (Part 1)

July 4, 2018 in Blog

Managing abuse inboxes and phishing response across an enterprise is often a complex and manual operation. In this multi-part video series, we provide detailed, step-by-step instructions on using the Tines Advanced Security Automation Platform to automate the entire process end-to-end, resulting in a more efficient and effective response.

Part 1 of the series covers:
1) Reading emails from an IMAP server
2) Extracting all URLs from the body of emails
3) Checking status of URLs in Virustotal
4) Using VT analysis to decide if the URL represents a threat
5) Contacting the victim with the results of our analysis
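Steps 3 and 4 above can be sketched in Python. The endpoint is VirusTotal’s public v2 URL-report API; the detection threshold is an illustrative choice, not a Tines default:

```python
def build_vt_url_report_request(api_key, url):
    """Build the VirusTotal v2 URL-report request (step 3)."""
    return {
        "url": "https://www.virustotal.com/vtapi/v2/url/report",
        "params": {"apikey": api_key, "resource": url},
    }

def url_is_malicious(report, threshold=3):
    """Decide if the URL is a threat (step 4): flag it when at least
    `threshold` engines detected it. Tune to your risk appetite."""
    return report.get("positives", 0) >= threshold
```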

Tines trial:
Virustotal API:
OneLogin Developer account:

Download and import the Part 1 story file (right-click -> save as): phishing-response-abuse-inbox-management-part-1.