
RSA NetWitness Platform


Although the RSA NetWitness Platform gives administrators visibility into system metrics through the Health & Wellness Systems Stats Browser, we currently do not have a method to see all storage / retention across our deployment in a single view.

 

Below you will find several scripts that will help us gain this visibility quickly and easily.

 

How It Works:

 

1. Dependency: get-all-systems.sh (attached). There are both v10 and v11 versions, so use the one for your particular environment. Please run this script prior to running get-retention.py, as it requires the 'all-systems' file, which contains all of your appliances & services.

2. We then read through the all-systems file and look for services that have retention, e.g. EndpointLogHybrid, EndpointHybrid, LogHybrid, LogDecoder, Decoder, Concentrator, Archiver.

3. Finally, we use the 'tlogin' functionality of NwConsole for certificate-based authentication (so there is no need to pass a username/password to the script) to pull database statistics and output the retention, in days, for that particular service. A rough sketch of steps 2 and 3 is shown below.
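For a concrete picture of steps 2 and 3, here is a minimal Python sketch (not the attached get-retention.py). The whitespace-delimited layout and column order of the all-systems file are assumptions on my part, and the actual NwConsole tlogin/stats call is left as a placeholder:

# Rough sketch only -- NOT the attached get-retention.py.
# Assumption: each line of 'all-systems' is whitespace-delimited and begins
# with a service name followed by a host; verify against your own file.
RETENTION_SERVICES = {"EndpointLogHybrid", "EndpointHybrid", "LogHybrid",
                      "LogDecoder", "Decoder", "Concentrator", "Archiver"}

def services_with_retention(path="all-systems"):
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 2:
                continue
            service, host = fields[0], fields[1]  # assumed column order
            if service in RETENTION_SERVICES:
                yield service, host

for service, host in services_with_retention():
    # Placeholder: the real script shells out to NwConsole here, uses
    # 'tlogin' for certificate-based authentication, reads the database
    # stats and prints the retention in days for this service.
    print("would query retention for %s on %s" % (service, host))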

 

Instructions:

 

1. Run ./get-all-systems_v10.sh (for 10.x systems) or ./get-all-systems_v11.sh (for 11.x systems)

2. Run ./get-retention.py  (without any arguments)

 

Sample Run: 

 

Please feel free to provide feedback, bug reports, etc.

 

There have been many improvements made over the past several releases to the RSA NetWitness product on the log management side of the house to help reduce the number of unparsed or misparsed devices.  There are still instances where manual intervention is necessary, and a report such as the one provided in this blog could prove valuable for you.

 

This report provides visibility into 4 types of situations:

 

Device.IP with more than 1 device.type

Devices that have multiple parsers acting on them over this time period, sorted from most parsers per IP to least

 

Unknown Devices

Unknown devices are those for which no parser was detected, or for which no parser is installed/enabled.

 

Device.types with word meta

Device types with word meta indicate that a parser has matched a header for that device but no payload (message body) has matched a parser entry.

 

Device.type with parseerror

Devices that parse meta for most fields but have parseerror meta for particular meta key data. This can indicate that the format of the data going into the key does not match the format of the key (for example, an invalid MAC address written to eth.src or eth.dst, which are MAC-formatted keys, or text written to an IP key). Illustrative queries follow below.
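For orientation, here are the kinds of query fragments these categories boil down to. These are illustrative assumptions on my part (standard key names, simplified logic); the authoritative rule content is in the GitHub repo linked below, and the "more than 1 device.type per device.ip" rule additionally relies on Reporting Engine grouping and counting rather than a single where clause:

# Illustrative only -- see the GitHub rule content linked below for the real rules.
EXAMPLE_WHERE_CLAUSES = {
    "unknown devices": "device.type = 'unknown'",
    "header matched, no message parsed": "word exists",
    "parse errors": "parseerror exists",
}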

 

Some of these categories are legitimate but checking this report once a week should allow you to keep an eye on the logging function of your NetWitness system and make sure that it is performing at its best.

 

The code for the Report is kept here (in clear text format so you can look at the rule content without needing to import it into NetWitness):

GitHub - epartington/rsa_nw_re_logparser_health 

 

Here's a sample report output:

 

Most people don't remember the well known port number for a particular network protocol. Sometimes we need to refer to an RFC to remember what port certain protocols normally run over. 

 

In the RSA NetWitness UI, the well-known name for the protocol is presented, but when you drill on it you get the well-known port number. 

 

This can be a little confusing at times if you aren't completely caffeinated.☕

 

Well here's some good news: you can use the name of the service in your drills and reports with the following syntax:

 

Original method:

Service=123 

 

New method:

Service="NTP"

 

You may get an error about needing quotes around the word; however, the system still interprets the query correctly.

 

 

This also works in profiles:

 

And in the Reporting Engine as well:

 

Good luck using this new trick!

   

(P.S. you can also use AND instead of && and OR instead of ||)

RSA NetWitness v11.2 introduced a very useful feature to the Investigation workflow with the improvement of the Profile feature.  In previous versions the Profile could have a pre-query set for it along with the meta and column groups, but you were locked to using only those two features unless you de-activated your profile.

 

With v11.2 you are able to keep the pre-query set from the profile and pivot to other meta and column groups.  This ability allows you to set Profiles as bookmarks or starting points for investigations or drills, along with the folders that can be set in the Profile section to help organize the various groups that frame investigations properly.

 

Below is a collection of the profiles as well as some meta and column groups to help collect various types of data or protocols together.

 

GitHub - epartington/rsa_nw_investigation_profiles 

 

Protocols

Medium

Log Device Classes

UEBA

 

Let me know if these work for you; I will be adding more to the GitHub site as they develop, so check back.

Oftentimes, RSA NetWitness Packet decoders are configured to monitor not only ingress and egress traffic but also internal LAN traffic.  On a recent engagement, we identified a significant amount of traffic going to TCP port 9997.  It did not take long to realize this traffic was from internal servers configured to forward their logs to Splunk.

 

The parser will add to the 'service' meta key and write the value '9997'.  After running the parser for several hours, we also found other ports that were used by the Splunk forwarders.  

 

While there wasn't anything malicious or suspicious with the traffic, it was a significant amount of traffic that was taking up disk space.  By identifying the traffic, we can make it a filtering candidate.  Ideally, the traffic would be filtered further upstream at a TAP, but sometimes that isn't possible.  

 

If you are running this parser, you could also update the index-concentrator-custom.xml and add an alias to the service types.  

 

 

...

 

 

If you have traffic on your network that you want better ways to identify, let your RSA account team know.  

 

Good luck, and happy hunting.

I helped one of my customers implement a use case last year that entailed sending email alerts to specific users when those users logged into legacy applications within their environment.

 

Creating the alerts for this activity with the ESA was rather trivial - we knew which event source would generate the logs and the meta to trigger against - but sending the alert via email to the specific user that was ID'd in the alert itself added a bit of complexity.

 

Fortunately, others have had similar-ish requirements in the past and there are guides on the community that cover how to generate custom emails for ESA alerts through the script notification option, such as Custom ESA email template with raw event payload and 000031690 - How to send customized subjects in an RSA Security Analytics ESA alert email.

 

This meant that all we had to do was map the usernames from the log events to the appropriate email addresses, enrich the events and/or alerts with those email addresses, and then customize the email notification using that information.  Mapping the usernames to email addresses and adding this information to events/alerts could have been accomplished in a couple different ways - either a custom Feed (Live: Create a Custom Feed) or an In-Memory Table (Alerting: Configure In-Memory Table as Enrichment Source) - for this customer the In-Memory Table was the preferred option because it would not create unnecessary meta in their environment.

 

We added the CSV containing the usernames and email addresses as an enrichment source:
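For reference, the enrichment source itself was nothing more than a small two-column CSV along these lines (hypothetical usernames and addresses; the column names just need to match what you reference in the enrichment configuration and in the script further down):

username,email
jsmith,jsmith@example.com
mjones,mjones@example.com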

 

....then added that enrichment to the ESA alert:

 

With these steps done, we triggered a couple alerts to see exactly what the raw output looked like, specifically how the enrichment data was included.  The easiest way to find raw alert output is within the respond module by clicking into the alert and looking for  the "Raw Alert" pane:
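For illustration, a heavily trimmed version of what that raw alert can look like once the enrichment is attached is sketched below. The values are made up; the important part is that the enrichment rows show up under events[0] with the name of the in-memory table (user_emails in this example), which is exactly the path the script reads:

# Hypothetical, trimmed structure of the raw alert JSON (illustration only).
esa_alert_example = {
    "module_name": "Legacy Application Login",
    "events": [
        {
            "device_host": "legacyapp01",
            "service_name": "legacy-app",
            "user_dst": "jsmith",
            # enrichment rows appear under the in-memory table's name
            "user_emails": [
                {"username": "jsmith", "email": "jsmith@example.com"}
            ]
        }
    ]
}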

 

Armed with this information, we were then able to write the script (copy/pasting from the articles linked above and modifying the details) to extract the email address and use that as the "to_addr" for the email script (also attached at the bottom of this post):

#!/usr/bin/env python
from smtplib import SMTP
import datetime
import json
import sys

def dispatch(alert):
    """
    The default dispatch just prints the 'last' alert to /tmp/esa_alert.json. Alert details
    are available in the Python hash passed to this method e.g. alert['id'], alert['severity'],
    alert['module_name'], alert['events'][0], etc.
    These can be used to implement the external integration required.
    """

    with open("/tmp/esa_alert.json", mode='w') as alert_file:
        alert_file.write(json.dumps(alert, indent=True))

def read():
    #Parameter
    smtp_server = "<your_mail_relay_server>"
    smtp_port = "25"
    # "smtp_user" and "smtp_pass" are necessary
    # if your SMTP server requires authentication
    # used in "smtp.login()" below
    #smtp_user = "<your_smtp_user_name>"
    #smtp_pass = "<your_smtp_user_password>"
    from_addr = "<your_mail_sending_address>"
    missing_msg = ""
    to_addr = ""  #defined from enrichment table

    # Get data from JSON
    esa_alert = json.loads(open('/tmp/esa_alert.json').read())
    #Extract Variables (Add as required)
    try:
        module_name = esa_alert["module_name"]
    except KeyError:
        module_name = "null"
    try:
         to_addr = esa_alert["events"][0]["user_emails"][0]["email"]
    except KeyError:
         missing_msg = "ATTN:Unable to retrieve from enrich table"
         to_addr = "<address_to_send_to_when_enrichment_fails>"
    try:
        device_host = esa_alert["events"][0]["device_host"]
    except KeyError:
        device_host = "null"
    try:
        service_name = esa_alert["events"][0]["service_name"]
    except KeyError:
        service_name = "null"
    try:
        user_dst = esa_alert["events"][0]["user_dst"]
    except KeyError:
        user_dst = "null"
    # Sends Email
    smtp = SMTP()
    smtp.set_debuglevel(0)
    smtp.connect(smtp_server,smtp_port)

    date = datetime.datetime.now().strftime( "%m/%d/%Y %H:%M" ) + " GMT"
    subj = "Login Attempt on " + ( device_host )
    message_text = ("Alert Name: \t\t%s\n" % ( module_name ) +
        " \t\t%s\n" % ( missing_msg ) +
        "Date/Time : \t%s\n" % ( date  )  +
        "Host: \t%s\n" % ( device_host ) +
        "Service: \t%s\n" % ( service_name ) +
        "User: \t%s\n" % ( user_dst )
    )

    msg = "From: %s\nTo: %s\nSubject: %s\nDate: %s\n\n%s\n" % ( from_addr, to_addr, subj, date, message_text )
    # "smtp.login()" is necessary if your
    # SMTP server requires authentication
    #smtp.login(smtp_user,smtp_pass)
    smtp.sendmail(from_addr, to_addr, msg)
    smtp.quit()

if __name__ == "__main__":
    dispatch(json.loads(sys.argv[1]))
    read()
    sys.exit(0)

 

And the result, after adding the script as a notification option within the ESA alert:

-----------------------------

 

Of course, all of this can and should be modified to include whatever information you might want/need for your use case.

Amazon Virtual Private Clouds (VPC) are used in hybrid cloud enterprise environments to securely host certain workloads and customers need to enable their SOC to identify potential threats with these components of their infrastructure.  The RSA NetWitness Platform supports ingest of many 3rd party sources,  including Amazon CloudTrail, GuardDuty, and now VPC Flow Logs.

 

The RSA NetWitness Platform has reporting content for Analysts to leverage in assessing the VPC security and overall health.  In https://community.rsa.com/docs/DOC-97451 we illustrate out-of-the-box reporting content to allow an analyst to get quick visibility into potential operational issues, such as highest and lowest accepted/rejected connections and traffic patterns on each VPC. 

 

VPC Flow Logs is an AWS monitoring feature that captures information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs and Amazon S3. After you've created a flow log, you can retrieve and view its data in the chosen destination. 

 

Logs from Amazon VPCs can be exported to CloudWatch. The RSA NetWitness Platform AWS VPC plugin uses the CloudWatch API to capture the logs.
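For context, the snippet below is a minimal illustration of pulling flow log events out of CloudWatch Logs with boto3. It is not the NetWitness plugin's code, and the log group name is a placeholder; it simply shows the kind of CloudWatch Logs API call the plugin relies on:

import boto3

# Illustration only -- not the NetWitness AWS VPC plugin's implementation.
logs = boto3.client("logs", region_name="us-east-1")
response = logs.filter_log_events(
    logGroupName="/vpc/flowlogs",   # placeholder log group name
    limit=10,
)
for event in response.get("events", []):
    print(event["timestamp"], event["message"])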

 

 

 

 


This project is an attempt at building a method of orchestrating threat hunting queries and tasks within RSA NetWitness Orchestrator (NWO).  The concept is to start with a hunting model defining a set of hunting steps (represented in JSON), have NWO ingest the model and make all of the appropriate "look ahead" queries into NetWitness, organizing results, adding context and automatically refining large result sets before distributing hunting tasks among analysts.  The overall goal is to have NWO keep track of all hunting activities, provide a platform for threat hunting metrics, and (most importantly) to offload as much of the tedious and repetitive data querying, refining, and management that is typical of threat hunting from the analyst as possible.  

 

Please leave a comment if you'd like some help getting everything going, or have a specific question.  I'd love to hear all types of feedback and suggestions to make this thing useful.

 

Usage

The primary dashboard shows the results of the most recent Hunting playbook run, which essentially contains each hunting task on its own row in the table, a link to the dynamically generated Investigation task associated with that task, a count of the look ahead query results, and multiple links to the data in the toolset (in this case, RSA NetWitness).  The analyst has not been involved up until now, as NWO did all of this work in the background. 

 

Primary Hunting Dashboard

 

Pre-built Pivot into Toolset (NetWitness here)

 

The automations will also try to add extra context if the result set is Rare, has Possible Beaconing, contains Indicators, or matches a few other such "influencers", along with extra links to just those subsets of the overall results for that task.  Influencers are special logic embedded within the automations to help extract additional insight so that the analyst doesn't have to.  There hasn't been too much thought put into the logic behind these pieces just yet, so please consider them all proofs of concept and/or placeholders, and definitely share any ideas or improvements you may have.  The automations will also try to pare down large result sets if you have defined thresholds within the hunting model JSON.  The entire result set will still be reachable, but you'll get secondary counts/links where the system has tried to aggregate the rarest N results based on the "Refine by" logic also defined in the model, e.g.:

 

If defined in the huntingcontent.json, a specific query/task can be given a threshold and will try to refine results by rarity if the threshold is hit.  Example above shows a raw count of 6850 results, but a refined set of 35 results mapping to the rarest 20 {ip.dst, org.dst} tuples seen.
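As a rough sketch of what that refinement amounts to (this is illustrative, not the automation's actual code; it assumes results are rows of meta values and that the "refine by" keys form the tuple being counted):

from collections import Counter

def refine_by_rarity(results, refine_keys, keep=20):
    # Keep only the rows whose refine-key tuple is among the 'keep'
    # rarest tuples in the result set.
    tuples = [tuple(row[k] for k in refine_keys) for row in results]
    counts = Counter(tuples)
    rarest = {t for t, _ in sorted(counts.items(), key=lambda kv: kv[1])[:keep]}
    return [row for row in results
            if tuple(row[k] for k in refine_keys) in rarest]

# e.g. refine_by_rarity(rows, ["ip.dst", "org.dst"], keep=20) reduces a large
# result set to just the rows behind the 20 rarest {ip.dst, org.dst} pairs.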

 

For each task, the assigned owner can drill directly into the relevant NetWitness data, or can drill into the Investigation associated with the task.  Right now the investigation playbooks for each task are void of any special playbooks themselves - they simply serve as a way to organize tasks, contain findings for each hunt task and a place from which to spawn child incidents if anything is found:

 

 

From here it is currently just up to the analyst to create notes, complete the task, or generate child investigations.  Future versions will do more with these sub investigation/hunt task playbooks to help the analyst. For now it's just a generic "Perform the Hunt" manual task.  Note that when these hunt task investigations get closed, the Hunt Master will update the hunting table and mark that item as "complete", signified by a green dot and a cross-through as shown in the first screenshot.

 

How It Works

Playbook Logic

  1. Scheduled job or ad-hoc creation of "Hunt" incident that drives the primary logic and acts as the "Hunt Master"
  2. Retrieve hunting model JSON (content file and model definition file) from a web server somewhere
  3. Load hunting model, perform "look ahead", influencer, and refining queries
  4. Create hunting table based on query results, mark each task as "In Progress", "Complete", or "No Query Defined"
  5. Generate dynamic hunting investigations for each task that had at least 1 result from the look-ahead queries
  6. Set a recurring task for the Hunt Master to continuously look for all related hunt tasks (they share a unique ID) and monitor their progress, updating the hunting table accordingly.
  7. [FUTURE] Continuously re-query the result sets in different ways to find outliers (e.g. stacking different meta keys and adding new influencers/links to open hunting tasks)

(Both the "Hunt Master" and the generated "Hunt Tasks" are created as incidents, tied together with a unique ID - while they could certainly be interacted with inside of the Incidents panel, the design is to have hunters operate from the hunting dashboard)

 

The Hunting Model

Everything is driven off of the hunting model & content.  The idea is to be able to implement any model/set of hunting tasks along with the queries that would normally get an analyst to the corresponding subset of data.  The example and templates given here correspond with the RSA Network Hunting Labyrinth, modeled after the RSA NetWitness Hunting Guide: RSA NetWitness Hunting Guide 

huntingcontent.json

This file must sit on a web server somewhere, accessible by the NWO server. You will later configure your huntingcontentmodel.json file to point to its location if you want to manage your own (instead of the default version found here: https://raw.githubusercontent.com/smennis/nwohunt/master/content/huntingcontent.json ).

 

This file defines hunting tasks in each full branch of the JSON file, along with queries and other information to help NWO discover the data and organize the results:

 

 

(snippet of hunting content json)

 

The JSON file can have branches of length N, but the last element in any given branch, which defines a single hunting technique/task, must have an element of the following structure (a hypothetical example is sketched below). Note that "threshold" and "refineby" are optional, but "query" and "description" are mandatory, even if the values are blank.
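A hypothetical leaf element might look like the following. The four field names come straight from the description above; the values, and the exact formats of threshold and refineby, are guesses for illustration, so check the template in the repo for the real shape:

{
  "query": "service=3389 && direction='inbound'",
  "description": "Look for RDP arriving from the internet",
  "threshold": 5000,
  "refineby": "ip.src"
}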

 

 

The attached (same as the github link above as of the first release) example huntingcontent.json is meant to be a template as it is currently at the very beginning stages of being mapped to the RSA Network Hunting Labyrinth methodology.  This will be updated with higher resolution queries over time. Once we can see this operate in a real environment, the plan is to leverage a lot more of the ioc/eoc/boc and *.analysis keys in RSA NetWitness to take this beyond a simple proof of concept. You may also choose to completely define your own to get you started.

 

huntingcontentmodel.json

This file must sit on a web server somewhere, accessible by the NWO server. A public version is available here: https://raw.githubusercontent.com/smennis/nwohunt/master/content/huntingcontentmodel.json  but you will have to clone and update this to include references to your NWO and NW servers before it will work.  This serves as the configuration file, specific to your environment, that describes the structure of the huntingcontent.json file, display options, icon options, language, resource locations, and a few other configurations. It was done this way to avoid hard coding anything into the actual playbooks and automations:

 

model: Defines the heading for each level of huntingcontent.json.  A "-" in front means it will still be looked for programmatically but will not be displayed in the table.

 

language: This is a basic attempt at making the hunting tasks described by the model more human readable by connecting each level of the json with a connector word.  Again, a "-" in front of the value means it will not be displayed in the table.


groupingdepth: This tells NWO how many levels to go when grouping the tasks into separate tables. E.g., a grouping level of "0" would contain one large table with each full JSON branch in a row, while a grouping level of "3" will create a separate table for each group defined 3 levels into the JSON (this is what's shown in the dashboard table above).

 

verbosity: 0 or 1 - a value of 1 means that an additional column will be added to the table with the entire "description" value displayed. When 0, you can still see the description information by hovering over the "hover for info" link in the table.

 

queryurl: This defines the base URL where the specific queries (in the 'query' element of huntingcontent.json) will be appended in order to drill into the data.  Example above is from my lab, so be sure to adjust this for your environment.

 

influencers: The set of influencers above are the ones that have been built into the logic so far.  This isn't as modular under the hood as it should be, but I think this is where there is a big opportunity for collaboration and innovation, and where some of the smarter & continuous data exploration will be governed.  iocs, criticalassets, blacklist, and whitelist are just additional queries the system will do to gain more insight and add the appropriate icon to the hunt item row.  rarity is not well implemented yet and just adds the icon when there are < N (10 in this case) results.  This will eventually be updated to look for rarity in the dataset against a specific entity (single IP, host, etc.) rather than the overall result count.  possiblebeacon is implemented to look for a 24 hour average of communication between two hosts signifying approximately 1 beacon per minute, 5 minutes, or 10 minutes along with a tolerance percentage.  Just experimenting with it at this point.   Note that the "weight" element doesn't affect anything just yet. The eventual concept is to build a scoring algorithm to help prioritize or add fidelity the individual hunting tasks.
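To make the possiblebeacon idea concrete, here is a small sketch of the arithmetic involved, written as Python. It is not the automation's actual logic; the interval set and the way tolerance is applied are assumptions:

def looks_like_beacon(session_count, hours=24, tolerance_pct=10):
    # Does the session count between two hosts over 'hours' roughly match
    # a 1-, 5- or 10-minute beacon interval, within the tolerance?
    for interval_minutes in (1, 5, 10):
        expected = hours * 60 / interval_minutes  # 1440, 288 or 144 for 24h
        if abs(session_count - expected) <= expected * tolerance_pct / 100.0:
            return True
    return False

# e.g. 1405 sessions in 24 hours is within 10% of 1440, so it would be
# flagged as a possible 1-minute beacon.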

   

Instructions

Installation Instructions:

  1. Prerequisites:  RSA NetWitness Network (Packets) and Logs (original version of NetWitness query integration) integration installed.  Note that there is currently a v2 NetWitness integration, but this will not work with that version at this time due to the change in how the commands work. I will try to update the automations for the v2 integration ASAP.
    1. The v1 NetWitness Integration is included in the zip.  Settings > Integrations > Import.
  2. Create a new incident type named "Hunt Item" (don't worry about mapping a playbook yet)
  3. Import Custom Fields (Settings > Advanced > Fields) - import incidentfields.json (ignore errors)
  4. Import Custom Layouts (Settings > Advanced > Layout Builder > Hunt)
    1. Incident Summary - import layout-details.json
    2. New/Edit - import layout-edit.json
    3. Incident Quick View - import layout-details.json
  5. Import Automations (Automations > Import - one by one, unfortunately)

       - GenerateHuntingIncidents

       - PopulateHuntingTable

       - GenerateHuntingIncidentNameID

       - LoadHuntingJSON

       - NetWitness LookAhead

       - ReturnRandomUser

       - UpdateHuntingStatus

  6. Import Dashboard Widget Automations (Automations > Import)

       - GetCurrentHuntMasterForWidget

       - GetHuntParticipants

       - GetHuntTableForWidget   

  7. Import Sub-Playbooks (Playbooks > Import)

       - Initialize Hunting Instance

       - Hunting Investigation Playbook

  8. Import Primary Playbook (Playbooks > Import)

    - 0105 Hunting

  9. Map "0105 Hunting" Playbook to "Hunt" Incident Type (Settings > Incident Types > Hunt) and set the playbook to automatically start

  10. Map "Hunting Investigation Playbook" to "Hunt Item" Incident Type and set playbook to automatically start

  11. Import Dashboard

  12. Place huntingcontent.json, huntingcontentmodel.json (within the www folder of the packaged zip), onto a web server somewhere, accessible by the NWO server. Note, by default the attached/downloadable huntingcontentmodel.json points at github for the huntingcontent.json file. You can leave this as is (and over time you'll get a more complete set of hunting queries) or create your own as you see fit and place it on your web server as well.

Configuration:

Before the first run, you'll have to make a few changes to point the logic at your own environment:

  1. Edit huntingcontentmodel.json and update all queryURL and icon URL fields to point at your NetWitness server and web server respectively.  You can also edit the "huntingContent" element of this file (not shown) to point at your own version of the huntingcontent.json file discussed above:
    (Top - huntingcontentmodel.json snippet, showing the references with respect to your standard NetWitness UI URL)
  2. Go into the "Initialize Hunting Instance" playbook, click on "Playbook Triggered" and enter the path to your huntingcontentmodel.json file (that includes updated fields pointing to NetWitness). If you leave it as is, none of the look ahead queries will work since no configuration file will be loaded.
  3. Create your first hunting incident: from Incidents > New Incident, select type "Hunt" and give it a time range. Start with 1 day for testing.
  4. Note that the playbook will automatically name the incident "Hunt Master" prepended with a unique HuntID. Everything is working if, in the Incidents page, you see a single Hunt Master and associated Hunt Items all sharing the same HuntID.

Opening up the Hunt Master incident Summary page (or Hunting Dashboard) should show you the full hunting table:

 

Please add comments as you find bugs or have additional ideas and content to contribute.

We are extremely proud to announce that RSA has been positioned as a “Leader” by Gartner®, Inc. in the 2018 Magic Quadrant for Security Information and Event Management research report for its RSA NetWitness® Platform.

 

The RSA NetWitness Platform pulls together SIEM, network monitoring and analysis, endpoint threat detection, UEBA and orchestrated response capabilities into a single, evolved SIEM solution. Our significant investments in our platform over the past 18 months make us the go-to platform for security teams to rapidly detect and respond to threats across their entire environment.

 

The 2018 Gartner Magic Quadrant for SIEM evaluates 17 vendors on the basis of the completeness of their vision and ability to execute. The report provides an overview of each vendor’s SIEM offering, along with what Gartner sees as strengths and cautions for each vendor. The report also includes vendor selection tips, guidance on how to define requirements for SIEM deployments, and details on its rigorous inclusion, exclusion and evaluation criteria. Download the report and learn more about RSA NetWitness Platform.

If you've ever wondered what levers you have available to pull for creating application rule logic then this is your one stop shop for an explanation.

 

There's a fully documented cheat sheet of the parameters you can use in application rules, located at the link below:

Application Rules Cheat Sheet 

 

There are some operators that I personally wasn't aware of, for example using ~ instead of not() to negate the contains/begins/ends functions, and I had forgotten about the ucount and unique operators that are available.

 

Also, v11.x introduced the ability to have metakeys on both the left and right side of operators (the table in that link explains which ones are available).

 

Overall, this is a good resource to bookmark if you are developing application rules in RSA NetWitness.

A recent customer question about alerting on Uptime values from the REST API got me digging into the Health and Wellness Policies for a better solution.

 

The request was to alert when the uptime value for specific device families was reset, indicating that something had occurred with the service and reset the uptime value.  Repeated resets of the uptime value could indicate an issue with the service that needed attention (core files created as a result of decoder service crashes were the root of this request).

 

Here is my solution:

  • Admin > Health and wellness > Policies
  • Select the + and add a new policy for the service that you want to monitor
  • In this case the Archiver service is our example

  • Add a new Rule
  • The conditions
    • Alarm = Regex match on .., .. seconds.*
    • Recovery = !Regex match on .., .. seconds.*

  • Save
  • Set your notification output at the bottom
  • Save and enable the policy at the top

 

Now you have a policy that alerts when the uptime is within the first 60 seconds of restarting (.. is two digits, so up to 60 seconds) and recovers once the uptime doesn't match the pattern (when 60 seconds switches to minutes and seconds, i.e. 61 seconds +).

 

Alarm

Recovery

 

 

Details on the pattern developed:

number of seconds followed by a comma then the friendly time breakdown of the seconds in years, months, weeks, days, hours, minutes and seconds.

.. = looked for 2 digits for the seconds (between 10-59 seconds after service restarted)

, .. = looked for the same seconds value after the comma

seconds.* = the word seconds and whatever trails it in the value

when this pattern is matched (between 10-59 seconds after restart) there will be an alarm, then it will clear when that pattern is not matched (60 seconds +)
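If you want to sanity-check the pattern before wiring up notifications, a quick Python test against example uptime strings (the stat value format here is assumed from the description above) behaves as expected:

import re

uptime_pattern = re.compile(r".., .. seconds.*")

# 45 seconds after a restart -- matches, so the policy alarms
print(bool(uptime_pattern.search("45, 45 seconds ")))           # True
# 75 seconds after a restart -- no match, so the alarm recovers
print(bool(uptime_pattern.search("75, 1 minute 15 seconds ")))  # False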

Hunting in RDP Traffic

Posted by Eric Partington, Nov 12, 2018

I was just working in the NOC for HackFest 2018 in Quebec City (https://hackfest.ca/en/) and playing with RDP traffic to see who was potentially accessing remote systems on the network.  

 

This was inspired by this deck from Brocon and some recent enhancements to the RDP parser. (https://www.bro.org/brocon2015/slides/liburdi_hunting_rdp.pdf)

 

Recent enhancements to the RDP parser include extracting the screen resolutions, the username as well as the hostname, certificate and other details.

 

With some simple charting language we can create a number of rules that look at various properties of RDP traffic based on direction (should you have RDP inbound from the internet? should you have RDP outbound to the internet?) as well as volume-based rules (which system has the most RDP session logins by unique username? which system connects to the most systems by distinct count of IP?).

 

The report is hosted here; simply import it into your Reporting Engine and point it at your packet broker/concentrators.

GitHub - epartington/rsa_nw_re_rdp: RDP summary reports for hunting/identification 

 

Please let me know if there are modifications to the Report that make it more useful to you.

 

Rules included in the report:

  • most frequent RDP hostnames
  • most frequent RDP keyboard languages
  • least frequent RDP keyboard languages
  • Outbound/Inbound/Lateral RDP traffic
  • Most frequent RDP screen resolutions
  • Most frequent RDP Usernames
  • Usernames by distinct destination IP
  • RDP Hosts with more than 1 username from them

A couple of clients have asked about a generic ESA template that can be used to alert into Arcsight for correlation with other sources.  After some testing and configuration this was the template that was created.  One thing that had us stuck for a short period of time was the timezone offset in the FreeMarker template to get Arcsight to read the time as UTC and apply the correct time offset.

 

Hopefully this helps others with this need.

 

<#include "macros.ftl"/>
CEF:0|RSA|NetWitness ESA|11.0|${moduleName}|${moduleName}|${severity}|<#list events as x>externalId=${x.sessionid!" "} proto=${x.ip_proto!" "} categoryOutcome=/Attempt categoryObject=Network categorySignificance=/Informational/Warning categoryBehavior=/Communicate host=<#if x.alias_host?has_content><@value_of x.alias_host /></#if> src=${x.ip_src!" "} spt=${x.tcp_srcport!" "} dhost=${x.host_dst!" "} dst=${x.ip_dst!" "} dpt=${x.tcp_dstport!" "} act=${x.action!" "} rt=${time?datetime?string("MMM dd yyyy HH:mm:ss z")} duser=${x.ad_username_dst!" "} suser=${x.ad_username_src!" "} filePath=${x.filename!" "} requestMethod=${x.action!" "} destinationDnsDomain=<#if x.alias_host?has_content><@value_of x.alias_host /></#if>  destinationServiceName=${x.service!" "}</#list> cs4=${moduleName} cs5=PROD cs6=MalwareCommunication

 

This CEF template is added to the Admin > System > Global Notifications > Templates tab and referenced in the ESA rules that need to alert out to Arcsight when they fire.

As cloud deployments continue to gain popularity, you may find the need to run the RSA NetWitness Platform in Google Cloud.  The RSA NetWitness Platform is already available for AWS and Azure; however, it is not "officially" available in Google Cloud as of 11/2018.

 

In this blog post I will walk through how to get the RSA NetWitness Platform running in Google Cloud.  This is NOT officially supported, however it does work and has been deployed in the field.

 

The rough steps are:

 

  1. Install NetWitness to a local virtual machine using the DVD ISO (Use single file for vmdk rather than split)
  2. After startup, edit /etc/default/grub
  3. Install ca-certificates via yum
  4. Add repo for Google and install a few more RPM's (https://cloud.google.com/compute/docs/instances/linux-guest-environment)
  5. Copy ISO to the VM (You can also use a Google storage bucket and gcfuse in place of this step)
  6. Install Google SDK on your local machine (https://cloud.google.com/compute/docs/gcloud-compute/)
  7. Upload vmdk from deployed machine to Google Cloud Storage bucket
  8. Run import tool (Importing Virtual Disks  |  Compute Engine Documentation  |  Google Cloud )
  9. (Skip this step if you copied ISO in step 5) Add gcfuse
  10. (Skip this step if you copied ISO in step 5) Use gcfuse to mount ISO
  11. Make a directory to mount the ISO
  12. Mount the ISO
  13. Remove existing ntp rpm (Skipping this step will cause bootstrap to fail)

 

  1. Use VMWare Workstation or vSphere to create a new virtual machine.  Follow sizing instructions here: Virtual Host Setup: Basic Deployment 
    1. Choose to install Operating System Later
    2. Adjust the VM to sizes needed
    3. Ensure you are using one file for the vmdk rather than splitting into multiple disks.  Converting split disks is not in scope for this blog
    4. For the CD/DVD ensure the option "Connected" is checked
    5. Select use ISO image and browse to the path of your 11.x DVD  ISO.  Please note there are both DVD and USB ISO's.  The instructions provided here used the DVD ISO.
    6. Finish and power on the Virtual Machine
    7. Follow the prompts to install NetWitness
  2. Google has very specific instructions on what kernel arguments are allowed for imported, bootable images.  More details here: Importing Boot Disk Images to Compute Engine  |  Compute Engine Documentation  |  Google Cloud 
    1. You'll want to change your Grub command line arguments to exclude any references to splash screens or quiet 
    2. For the NetWitness 11.1 ISO I used the following in /etc/default/grub:
    3. GRUB_TIMEOUT=5

      GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"

      GRUB_DEFAULT=saved

      GRUB_DISABLE_SUBMENU=true

      GRUB_TERMINAL_OUTPUT="console"

      GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=netwitness_vg00/root rd.lvm.lv=netwitness_vg00/swap biosdevname=1 net.ifnames=0 rd.shell=0 console=ttyS0,38400n8d"

      GRUB_DISABLE_RECOVERY="true"

  3. If DHCP did not automatically assign all network settings, assign gateway, ip and subnet in ifcfg file for the interface and ensure the machine has connectivity to the CentOS repos (https://www.cyberciti.biz/faq/howto-setting-rhel7-centos-7-static-ip-configuration/ )
  4. Run the following and accept any gpg keys if prompted.  The latest version of ca-certificates is required or the daisy converter service will fail when you run the import.
    1. yum install ca-certificates

  5. Add the Google yum repo
    1. vi  /etc/yum.repos.d/google-cloud.repo

    2. Paste contents below

      [google-cloud-compute]
      name=Google Cloud Compute
      baseurl=https://packages.cloud.google.com/yum/repos/google-cloud-compute-el7-x86_64
      enabled=1
      gpgcheck=1
      repo_gpgcheck=1
      gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
      https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

    3. Run command to clean up yum repos

      yum clean all

  6. Install Google Cloud helper rpm's.  Permanently accept any gpg keys so they are stored.  Also install any prerequisite rpm's.  This will prevent errors during the conversion.
    1. yum install python-google-compute-engine

      yum install google-compute-engine-oslogin

      yum install google-compute-engine

  7. Copy the 11.x ISO (the same ISO you used to build the VM) into /tmp via scp.  This will be used for mounting the local yum repo for bootstrap.  You can also use gcfuse in place of this step; however, we will not cover that here.
  8. Shutdown the VM and copy the vmdk to Google Cloud Storage bucket accessible to account used with the Google Cloud SDK.  Instructions can be found here: https://cloud.google.com/compute/docs/gcloud-compute/
  9. Run the import tool (Importing Virtual Disks  |  Compute Engine Documentation  |  Google Cloud )
    1. If your vmdk was named nw11.vmdk and your storage bucket is called netwitness the import command would be:

      gcloud compute images import nw11 --source-file gs://netwitness/nw11.vmdk --os centos-7

    2. This process can take up to a few hours
    3. Once the conversion is complete you will now have an image you can use to make NetWitness VM's
  10. Start the VM, switch to user root and mount the ISO that was copied to the vmdk before the conversion. My ISO copied was 11.2 and named rsa-11.2.0.0.3274.el7-dvd.iso
    1. su root

      mkdir /mnt/nw11gce

      mount -t iso9660 -o loop /tmp/rsa-11.2.0.0.3274.el7-dvd.iso /mnt/nw11gce

  11. Uninstall ntp and install the version on the NetWitness ISO so the bootstrap will complete successfully.  Google installs a newer version of the ntp rpm.  The version NetWitness uses can be reinstalled from the ISO you just mounted in step 10
    1. yum remove ntp

      rpm -e ntpdate

      rpm -Uvh /mnt/nw11gce/Packages/11.2.0.0/OS/ntpdate*.rpm

  12. Run nwsetup-tui to complete the install

 

You should now have a working NetWitness image you can build from.  One thing I have noticed is that during some kernel upgrades (which are included in service packs, patches and major versions of NetWitness software updates) additional arguments are added that can cause the instance to lose ssh connectivity and the software to not function correctly.  After any upgrade and BEFORE reboot, I recommend checking to ensure additional kernel arguments have not been added.  I'd also recommend upgrading in a lab or small instance first, as well as taking a snapshot prior to upgrade so you can return to a known good state if needed.

Hi Everyone,

The PDF compilations for RSA NetWitness Platform (Logs & Network) Version 11.2 are now available at the following link: RSA NetWitness Logs & Network 11.2.  This page is also accessible by navigating to the main RSA NetWitness Community and choosing Version 11.2 on the right hand side of the page.  

 

Once on that page, the links to the documents look like this:
