
RSA NetWitness Platform


Hunting in RDP Traffic

Posted by Eric Partington Employee Nov 12, 2018

I was just working in the NOC for HackFest 2018 in Quebec City (https://hackfest.ca/en/) and playing with RDP traffic to see who was potentially accessing remote systems on the network.  

 

This was inspired by this deck from BroCon (https://www.bro.org/brocon2015/slides/liburdi_hunting_rdp.pdf) and by some recent enhancements to the RDP parser.

 

Recent enhancements to the RDP parser include extraction of the screen resolution, username, hostname, certificate, and other details.

 

With some simple report language we can create a number of rules that examine RDP traffic by direction (should you have RDP inbound from the internet? should you have RDP outbound to the internet?) as well as by volume (which system has the most RDP session logins by unique username? which system connects to the most systems by distinct count of IP?).
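
For example, the direction-based rules boil down to a simple where clause on the RDP service type. A minimal sketch (this assumes the traffic_flow Lua parser is deployed and populating the direction meta key; the rules in the report itself are the authoritative versions):

service = 3389 && direction = 'outbound'
service = 3389 && direction = 'inbound'
service = 3389 && direction = 'lateral'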

 

The report language is hosted here; simply import it into your Reporting Engine and point it at your packet broker/concentrators.

GitHub - epartington/rsa_nw_re_rdp: RDP summary reports for hunting/identification 

 

Please let me know if there are modifications to the Report that make it more useful to you.

 

Rules included in the report:

  • Most frequent RDP hostnames
  • Most frequent RDP keyboard languages
  • Least frequent RDP keyboard languages
  • Outbound/Inbound/Lateral RDP traffic
  • Most frequent RDP screen resolutions
  • Most frequent RDP usernames
  • Usernames by distinct destination IP
  • RDP hosts with more than one username from them

A couple of clients have asked about a generic ESA template that can be used to alert into ArcSight for correlation with other sources.  After some testing and configuration, this is the template we created.  One thing that had us stuck for a short period was the timezone offset in the FreeMarker template needed to get ArcSight to read the time as UTC and apply the correct offset.

 

Hopefully this helps others with this need.

 

<#include "macros.ftl"/>
CEF:0|RSA|NetWitness ESA|11.0|${moduleName}|${moduleName}|${severity}|<#list events as x>externalId=${x.sessionid!" "} proto=${x.ip_proto!" "} categoryOutcome=/Attempt categoryObject=Network categorySignificance=/Informational/Warning categoryBehavior=/Communicate host=<#if x.alias_host?has_content><@value_of x.alias_host /></#if> src=${x.ip_src!" "} spt=${x.tcp_srcport!" "} dhost=${x.host_dst!" "} dst=${x.ip_dst!" "} dpt=${x.tcp_dstport!" "} act=${x.action!" "} rt=${time?datetime?string("MMM dd yyyy HH:mm:ss z")} duser=${x.ad_username_dst!" "} suser=${x.ad_username_src!" "} filePath=${x.filename!" "} requestMethod=${x.action!" "} destinationDnsDomain=<#if x.alias_host?has_content><@value_of x.alias_host /></#if> destinationServiceName=${x.service!" "}</#list> cs4=${moduleName} cs5=PROD cs6=MalwareCommunication

 

This CEF template is added to the Admin > System > Global Notifications > Templates tab and referenced in the ESA rules that need to alert out to ArcSight when they fire.
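
If the receiving SIEM still misinterprets the timestamp, FreeMarker's setting directive is one way to force times to render as UTC. A minimal sketch, not part of the shipped template (placing it near the top of the template, after the macros include, is my assumption):

<#setting time_zone="UTC">
<#-- the rt field then renders as, e.g., Nov 12 2018 14:05:33 UTC -->
rt=${time?datetime?string("MMM dd yyyy HH:mm:ss z")}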

As cloud deployments continue to gain popularity, you may find the need for running the RSA NetWitness Platform in Google Cloud.  The RSA NetWitness Platform is already available for AWS and Azure; however, it is not "officially" available in Google Cloud as of November 2018.

 

In this blog post I will walk through how to get the RSA NetWitness Platform running in Google Cloud.  This is NOT officially supported; however, it does work and has been deployed in the field.

 

The rough steps are:

 

  1. Install NetWitness to a local virtual machine using the DVD ISO (use a single file for the vmdk rather than split disks)
  2. After startup, edit /etc/default/grub
  3. Install ca-certificates via yum
  4. Add the repo for Google and install a few more RPMs (https://cloud.google.com/compute/docs/instances/linux-guest-environment)
  5. Copy the ISO to the VM (you can also use a Google storage bucket and gcsfuse in place of this step)
  6. Install the Google Cloud SDK on your local machine (https://cloud.google.com/compute/docs/gcloud-compute/)
  7. Upload the vmdk from the deployed machine to a Google Cloud Storage bucket
  8. Run the import tool (Importing Virtual Disks  |  Compute Engine Documentation  |  Google Cloud )
  9. (Skip this step if you copied the ISO in step 5) Add gcsfuse
  10. (Skip this step if you copied the ISO in step 5) Use gcsfuse to mount the ISO
  11. Make a directory to mount the ISO
  12. Mount the ISO
  13. Remove the existing ntp rpm (skipping this step will cause bootstrap to fail)

 

  1. Use VMWare Workstation or vSphere to create a new virtual machine.  Follow sizing instructions here: Virtual Host Setup: Basic Deployment 
    1. Choose to install Operating System Later
    2. Adjust the VM to sizes needed
    3. Ensure you are using one file for the vmdk rather than splitting into multiple disks.  Converting split disks is not in scope for this blog
    4. For the CD/DVD ensure the option "Connected" is checked
    5. Select use ISO image and browse to the path of your 11.x DVD ISO.  Please note there are both DVD and USB ISOs.  The instructions provided here used the DVD ISO.
    6. Finish and power on the Virtual Machine
    7. Follow the prompts to install NetWitness
  2. Google has very specific instructions on what kernel arguments are allowed for imported, bootable images.  More details here: Importing Boot Disk Images to Compute Engine  |  Compute Engine Documentation  |  Google Cloud 
    1. You'll want to change your GRUB command-line arguments to exclude any references to splash screens or "quiet"
    2. For the NetWitness 11.1 ISO I used the following for /etc/default/grub:
    3. GRUB_TIMEOUT=5

      GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"

      GRUB_DEFAULT=saved

      GRUB_DISABLE_SUBMENU=true

      GRUB_TERMINAL_OUTPUT="console"

      GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=netwitness_vg00/root rd.lvm.lv=netwitness_vg00/swap biosdevname=1 net.ifnames=0 rd.shell=0 console=ttyS0,38400n8d"

      GRUB_DISABLE_RECOVERY="true"
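
    4. For the edit to take effect, the GRUB config typically needs to be regenerated before shutdown (a step the post doesn't spell out); on CentOS 7 with BIOS boot that is:

      grub2-mkconfig -o /boot/grub2/grub.cfg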

  3. If DHCP did not automatically assign all network settings, assign the gateway, IP, and subnet in the ifcfg file for the interface, and ensure the machine has connectivity to the CentOS repos (https://www.cyberciti.biz/faq/howto-setting-rhel7-centos-7-static-ip-configuration/); a minimal sketch follows
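    1. A minimal static ifcfg sketch (the interface name, addresses, and DNS below are placeholders for your environment):

      TYPE=Ethernet
      BOOTPROTO=none
      DEVICE=eth0
      NAME=eth0
      ONBOOT=yes
      IPADDR=192.168.1.50
      PREFIX=24
      GATEWAY=192.168.1.1
      DNS1=192.168.1.1
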
  4. Run the following and accept any gpg keys if prompted.  The latest version of ca-certificates is required or the daisy converter service will fail when you run the import.
    1. yum install ca-certificates

  5. Add the Google yum repo
    1. vi  /etc/yum.repos.d/google-cloud.repo

    2. Paste contents below

      [google-cloud-compute]
      name=Google Cloud Compute
      baseurl=https://packages.cloud.google.com/yum/repos/google-cloud-compute-el7-x86_64
      enabled=1
      gpgcheck=1
      repo_gpgcheck=1
      gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
             https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

    3. Run command to clean up yum repos

      yum clean all

  6. Install the Google Cloud helper RPMs, permanently accepting any GPG keys so they are stored, and install any prerequisite RPMs.  This will prevent errors during the conversion.
    1. yum install python-google-compute-engine

      yum install google-compute-engine-oslogin

      yum install google-compute-engine

  7. Copy the 11.x ISO (the same ISO you used to build) into /tmp via scp.  This will be used to mount the local yum repo for bootstrap.  You can also use gcsfuse in place of this step; however, we will not cover that here.
  8. Shut down the VM and copy the vmdk to a Google Cloud Storage bucket accessible to the account used with the Google Cloud SDK (a one-liner example follows).  Instructions can be found here: https://cloud.google.com/compute/docs/gcloud-compute/
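    1. If you installed the Cloud SDK, the upload itself is a single gsutil command (the bucket and file names here match the example in step 9):

      gsutil cp nw11.vmdk gs://netwitness/
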
  9. Run the import tool (Importing Virtual Disks  |  Compute Engine Documentation  |  Google Cloud )
    1. If your vmdk was named nw11.vmdk and your storage bucket is called netwitness the import command would be:

      gcloud compute images import nw11 --source-file gs://netwitness/nw11.vmdk --os centos-7

    2. This process can take up to a few hours
    3. Once the conversion is complete you will have an image you can use to make NetWitness VMs
  10. Start the VM, switch to user root, and mount the ISO that was copied to the VM before the conversion. The ISO I copied was 11.2, named rsa-11.2.0.0.3274.el7-dvd.iso
    1. su root

      mkdir /mnt/nw11gce

      mount -t iso9660 -o loop /tmp/rsa-11.2.0.0.3274.el7-dvd.iso /mnt/nw11gce

  11. Uninstall ntp and install the version from the NetWitness ISO so bootstrap will complete successfully.  Google installs a newer version of the ntp rpm; the version NetWitness uses can be reinstalled from the ISO you just mounted in step 10
    1. yum remove ntp

      rpm -e ntpdate

      rpm -Uvh /mnt/nw11gce/Packages/11.2.0.0/OS/ntpdate*.rpm

  12. Run nwsetup-tui to complete the install

 

You should now have a working NetWitness image you can build from.  One thing I have noticed: during some kernel upgrades (which are included in service packs, patches, and major versions of NetWitness software updates), additional kernel arguments are added that can cause the instance to lose SSH connectivity and the software to not function correctly.  After any upgrade, and BEFORE rebooting, I recommend checking that additional kernel arguments have not been added.  I'd also recommend upgrading in a lab or small instance first, as well as taking a snapshot prior to upgrade, so you can return to a known good state if needed.

Hi Everyone,

The PDF compilations for RSA NetWitness Platform (Logs & Network) Version 11.2 are now available at the following link: RSA NetWitness Logs & Network 11.2.  This page is also accessible by navigating to the main RSA NetWitness Community and choosing Version 11.2 on the right-hand side of the page.

 

Once on that page, the links to the documents look like this:

Localized documents that were updated for Version 11.1 are posted on RSA Link for customers who speak Japanese, Spanish, German, and French.

I was recently working with Eric Partington, who asked if we could get Autonomous System Numbers from a recent update to GEOIP.  I believe at one point this was a feed, but it had been deprecated.  After a little research, I learned that an update had been made to the Lua libraries that allows calling a new API function named geoipLookup, which gives us this information as well as some other details that might be of interest.  A few years ago, I painstakingly created a feed for my own use to map countries to continents.  I wish I had had this function call back then.

 

The API call is as follows:

 

geoipLookup

-- Examples:
-- local continent = self:geoipLookup(ip, "continent", "names", "en") -- string
-- local country = self:geoipLookup(ip, "country", "names", "en") -- string
-- local country_iso = self:geoipLookup(ip, "country", "iso_code") -- string "US"
-- local city = self:geoipLookup(ip, "city", "names", "en") -- string
-- local lat = self:geoipLookup(ip, "location", "latitude") -- number
-- local long = self:geoipLookup(ip, "location", "longitude") -- number
-- local tz = self:geoipLookup(ip, "location", "time_zone") -- string "America/Chicago"
-- local metro = self:geoipLookup(ip, "location", "metro_code") -- integer
-- local postal = self:geoipLookup(ip, "postal", "code") -- string "77478"
-- local reg_country = self:geoipLookup(ip, "registered_country", "names", "en") -- string "United States"
-- local subdivision = self:geoipLookup(ip, "subdivisions", "names", "en") -- string "Texas"
-- local isp = self:geoipLookup(ip, "isp") -- string "Intermedia.net"
-- local org = self:geoipLookup(ip, "organization") -- string "Intermedia.net"
-- local domain = self:geoipLookup(ip, "domain") -- string "intermedia.net"
-- local asn = self:geoipLookup(ip, "autonomous_system_number") -- uint32 16406
function parser:geoipLookup(ipValue, category, [name], [language]) end

 

As you know, we already get many of these fields.  Meta keys such as country.src, country.dst, org.src, and org.dst are probably well known to many analysts and used for various queries.  Eric had asked for 'asn', and because I had tried it previously with a feed, I wanted to include 'continent' as well.

 

So... I created a Lua parser to get this for me.  My tokens were meta callbacks on ip.src and ip.dst.

 

[nwlanguagekey.create("ip.src", nwtypes.IPv4)] = lua_geoip_extras.OnHostSrc,
[nwlanguagekey.create("ip.dst", nwtypes.IPv4)] = lua_geoip_extras.OnHostDst,

 

My intent was to build this parser to work on both packet and log decoders.  I had originally wanted to use a different function call, but found it was not working properly on log decoders; the meta callbacks on ip.src and ip.dst, however, did work.  With that in mind, I could leverage this parser on both packet and log decoders. :-)

 

The meta keys I was going to write into were as follows:

 

nwlanguagekey.create("asn.src", nwtypes.Text),
nwlanguagekey.create("asn.dst", nwtypes.Text),
nwlanguagekey.create("continent.src", nwtypes.Text),
nwlanguagekey.create("continent.dst", nwtypes.Text),

 

Since I was using ip.src and ip.dst meta, I wanted to apply the same source and destination meta for my asn and continent values.  

 

Then, I just wrote out my functions:

 

-- Get ASN and Continent information from ip.src and ip.dst
function lua_geoip_extras:OnHostSrc(index, src)
   local asnsrc = self:geoipLookup(src, "autonomous_system_number")
   local continentsrc = self:geoipLookup(src, "continent", "names", "en")

   if asnsrc then
      --nw.logInfo("*** ASN SOURCE: AS" .. asnsrc .. " ***")
      nw.createMeta(self.keys["asn.src"], "AS" .. asnsrc)
   end
   if continentsrc then
      --nw.logInfo("*** CONTINENT SOURCE: " .. continentsrc .. " ***")
      nw.createMeta(self.keys["continent.src"], continentsrc)
   end
end

 

function lua_geoip_extras:OnHostDst(index, dst)
   local asndst = self:geoipLookup(dst, "autonomous_system_number")
   local continentdst = self:geoipLookup(dst, "continent", "names", "en")

   if asndst then
      --nw.logInfo("*** ASN DESTINATION: AS" .. asndst .. " ***")
      nw.createMeta(self.keys["asn.dst"], "AS" .. asndst)
   end
   if continentdst then
      --nw.logInfo("*** CONTINENT DESTINATION: " .. continentdst .. " ***")
      nw.createMeta(self.keys["continent.dst"], continentdst)
   end
end

 

This was my first time using this new API call, and my mind was racing with ideas on how else I could use this capability.  The one that immediately came to mind was enriching meta when X-Forwarded-For or Client-IP meta exists.  If it does, it should be parsed into a meta key called "orig_ip" today, or "ip.orig" in the future.  The meta key "orig_ip" is formatted as Text, so I need to account for that by determining the correct HostType; we don't want to pass a domain name when we are expecting an IP address.  I can do that by importing the functions from 'nwll'.
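
Purely to illustrate that idea, a sketch of such a callback follows. The OnOrigIp name and the crude dotted-quad check are my stand-ins; the parser attached to this post uses the proper nwll helpers for host-type detection:

[nwlanguagekey.create("orig_ip", nwtypes.Text)] = lua_geoip_extras.OnOrigIp,

function lua_geoip_extras:OnOrigIp(index, orig)
   -- only proceed for values that look like IPv4 addresses
   if orig and orig:match("^%d+%.%d+%.%d+%.%d+$") then
      local asn = self:geoipLookup(orig, "autonomous_system_number")
      if asn then
         nw.createMeta(self.keys["asn.src"], "AS" .. asn)
      end
   end
end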

 

In the past, the only meta that could be enriched by GEOIP was ip.src and ip.dst (I have not tested ipv6.src or ipv6.dst).  Now with this API call, I can apply the content of GEOIP to other IP address related meta keys.  I have attached the full parser to this post.  

 

Hope this helps others out there in the community and as always, happy hunting.

 

Chris

Background Information:

  • v10.6.x had a method in the UI to add a standalone NW head server for investigation purposes (and to help with DR scenarios) using legacy authentication (static local credentials).  
  • v11.x appeared to have removed that capability, which was blocking some of the larger upgrades; however, the capability actually still exists, it is just not presented in the UI as it was in v10.6.
  • Having a DR investigation server also helps provide analysts with continuous access to data during the major upgrade from v10.6.x to v11.2, which is incredibly beneficial.

 

Review the upgrade guide and the "Mixed Mode" notes at the link below for more details on the upgrade and running in mixed mode:

https://community.rsa.com/community/products/netwitness/blog/2018/10/18/running-rsa-netwitness-mixed-mode

 

If you spin up a DR v11.2 standalone NW server from the ISO/OVA, you can connect it to an existing set of concentrators using local credentials.  (Note: DO NOT expect Live or ESA to function as they do on the actual node0 NW server.  This method gets you a window into the meta for investigation, reporting, and dashboards only!)

 

Here are the steps you'll need to follow once you have your DR v11.2 NW server spun up:

 

Create local credentials to use for authentication with the concentrator(s) or broker(s) that you will connect to, under:

Admin > Services > <service> > Security

 

 

You will need to add some permissions to the aggregation role to allow the Event Analysis function to work:

Replicate the role and user to the other services that you will need to authenticate to.

 

Your 11.2 DR investigation head server can connect to a 10.6.6 Broker or Concentrator with the following:

 

Broker service > Explore

Select the broker

Right-click and select Properties

Select "add" from the drop-down

Add the concentrators that need to be connected (as they were in 10.6).  Below are the ports required for the connection:

  • 50005 for Concentrators
  • 56005 for SSL to Concentrators
  • 50003 to Broker 
  • 56003 for SSL to Broker

 

device=<ip>:<port> username=<> password=<>

 

Click Send.

 

You should get a successful connection and in the config section you will now see the aggregation connection setup:

 

Click Start aggregation and make sure Aggregate Autostart is checked:

 

Using this DR investigation server, you can follow the process below to help upgrade from v10.6.6 to v11.2+:

 

Initial State:

 

Upgrade the new Investigation Head:

 

Investigators now can use the 11.2 head to investigate without interruption during the production NW head server upgrade.

 

Upgrade the primary (node0) NW head server and ESA:

Upgrade the decoder/concentrator pairs:

Note: an outage will occur here for investigation as the stacks are upgraded

Now you'll be running v11.2 as you were running 10.6, with a DR investigation head server keeping your Investigation and Events views accessible.

This post details some of the implications of running in a mixed-mode environment. For the purposes of this post, a mixed-mode environment is one in which some services are running on Security Analytics 10.6.x, and others are running on NetWitness 11.x.

 

Note: RSA strongly suggests upgrading your 10.x services to 11.x to match your NetWitness server version, but running in Mixed-Mode allows you to stage your upgrade, especially for larger environments.

 

If you run in a mixed-mode environment for an extended time, you may see or experience some or all of the following behaviors:

Overall Administration and Management Functionality

  • If you add any 10.6.x hosts, you must add them manually to the v11.x architecture.
    • There is no automatic discovery or trust establishment via certificates.
    • You need to manually add them through username and password.
  • In 11.x, a secondary or alternate NetWitness (NW) Server is not currently supported, though this may change for future NetWitness versions.
    • Only the Primary NW Server could be upgraded (which would become "Node0").
    • Secondary NW Servers could be re-purposed to other host types.
  • The Event Analysis View is not available at all in mixed mode, and will not work until ALL devices are upgraded to 11.x.

Mixed Brokers

If you do not upgrade all of your Brokers, the existing Navigate and Event Grid view will still be available.

Implications for ESA

If you follow the recommended upgrade procedure for ESA services, note the following:

  • During the ESA upgrade, the following mongo collections are moved from the ESA mongodb to the NW Server mongodb:
    • im/aggregation_rule.*
    • im/categories
    • im/tracking_id_sequence
    • context-wds/* // all collections
    • datascience/* // all collections
  • The upgrade process performs some reformatting of the data, so make sure to follow the procedures described in the Physical Host Upgrade Guide and Physical Host Upgrade Checklist documents, available on RSA Link. One way to find these documents is to open the Master Table of Contents, where links are listed in the Installation and Upgrade section.

 

IMPORTANT! You MUST upgrade your ESA services at the same time you upgrade the NetWitness Server. If you do not, you will have to re-image all of the ESA services as new, and thus lose all of your data. Also, if you do not plan on updating your ESA services, you need to REMOVE them from the 10.6.x Security Analytics Server before you start your upgrade.

Hosts/Services that Remain on 10.6.x

  • If you add a 10.6.x host after you upgrade to 11.x, no configuration management is available through the NetWitness UI. You must use the REST API for this. Existing 10.6.x devices will be connected and manageable via 11.x -- as long as you do not remove any aggregation links.
  • You need to aggregate from 10.6.x hosts to 11.x hosts manually.
    • For example, for a Decoder on 10.6.x and a Concentrator on 11.x:
    • Same applies for any other 11.x service that is aggregating from a 10.6.x host.
  • If you have a secondary Security Analytics Server, RSA recommends that you keep it online to manage any hosts or services that are still running 10.6.x, until you have upgraded them all to 11.x.

Hybrids

If you are doing an upgrade on a system that has hybrids, the communication with the hybrids will still be functional. The Puppet CA cert is used as the cert for the upgraded 11.x system, so the trust is still in place.

For example, if you have a system with a Security Analytics or NetWitness Server, an ESA service, and several hybrids, you can upgrade the NW Server and the ESA service, and communications with the hybrids will still work.

Recommended Path Away from Mixed-Mode

For large installations, you can upgrade services in phases. RSA recommends working "downstream." For example:

  1. For the initial phase (phase 1), upgrade the NW Server, ESA and Malware services. Also, upgrade at least the top-level Broker. If you have multiple Brokers, the suggestion is to upgrade all of them in phase 1.
  2. For phase 2, upgrade your concentrators, decoders, and so forth. The suggestion is to upgrade the concentrators and decoders in pairs, so they can continue communicating correctly with each other.

MuddyWater is an APT group whose targets have mainly been in the Middle East, such as the Kingdom of Saudi Arabia, the United Arab Emirates, Jordan, and Iraq, with a focus on oil, military, telco, and government entities.

 

The group uses spear-phishing attacks as an initial vector. The email contains an attached Word document that tries to trick the user into enabling macros. The attachment's filename and content are usually tailored to the target, including the language used.

 

In the below example, we will look at the behavior of the following malware sample:

SHA-256: bfb4fc96c1ba657107c7c60845f6ab720634c8a9214943b5221378a37a8916cd

MD5: 16ac1a2c1e1c3b49e1a3a48fb71cc74f

 

Filetype: MS Word Document

 

 

Endpoint Behavior

This specific malware sample is aimed at an Arabic-speaking victim in Jordan; the filename "معلومات هامة.doc" translates to "important information.doc". Other variants contain content in Turkish and other languages.

 

The file shows blurry text in Arabic, with a message telling the target to enable content (and therefore macros) to unlock the content of the document.

 

Once the user clicks on "Enable Content", we're able to see the following behaviors on RSA NetWitness Endpoint.

 

1- The user opens the file. In this case, the file was opened from the Desktop folder; had it come from email, the parent would have shown as "outlook.exe" instead of "explorer.exe"

 

2- The malware uses "rundll32.exe" to execute the dropped file (C:\ProgramData\EventManager.log), allowing it to evade detection

 

3- PowerShell is then used to decode the payload of another dropped file ("C:\ProgramData\WindowsDefenderService.ini") and execute it. With the full arguments of the PowerShell command captured, an analyst could use them to decode the content of the "WindowsDefenderService.ini" file for further analysis

 

4- PowerShell modifies the "Run" registry key to run the payload at startup
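
For reference, persistence of this kind typically lands under one of the standard Run keys (the exact value name used by this sample isn't shown here):

HKCU\Software\Microsoft\Windows\CurrentVersion\Run
HKLM\Software\Microsoft\Windows\CurrentVersion\Run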

 

5- Scheduled tasks are also created 

 

 

After this, the malware will continue execution after a restart (this might be a layer of protection against sandboxes).

 

6- The infected machine is restarted

 

7- An additional PowerShell script, "a.ps1", is dropped

 

8- Some of the Windows security settings are disabled (such as Windows Firewall, Antivirus, ...)

 

 

 

By looking at the network activity on the endpoint, we can see that PowerShell has generated a number of connections to multiple domains and IPs (possible C2 domains).

 

 

Network Behavior

To look into the network side in more detail, we can leverage the captured network traffic on RSA NetWitness Network.

 

On RSA NetWitness Network, we can see the communication from the infected machine (192.168.1.128) to multiple domains and IP addresses over HTTP, matching what we saw originating from PowerShell on RSA NetWitness Endpoint.

We can also see that most of the traffic targets "db-config-ini.php". From this, it seems that the attacker has compromised several legitimate websites, and the "db-config-ini.php" file is owned by the attacker.

 

Having the full payload of the session on RSA NetWitness Network, we can reconstruct the session to confirm that it does in fact look like beaconing activity to a C2 server.

 

 

Even though the websites used might be legitimate (but compromised), we can still see suspicious indicators, such as:

  • POST request without a GET
  • Missing Headers
  • Suspicious / No User-Agent
  • High number of 404 Errors
  • ...
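
A simple starting point for hunting this pattern in RSA NetWitness is a query over the HTTP analysis meta. A minimal sketch (the analysis.service values available depend on the content deployed in your environment, so verify them before relying on this):

service = 80 && analysis.service = 'http post no get'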

 

 

 

Conclusions

We can see how the attacker uses legitimate, trusted, and possibly whitelisted modules, such as PowerShell and rundll32, to evade detection. The attacker also uses common file names for the dropped files and scripts, such as "EventManager" and "WindowsDefenderService", to avoid suspicion from analysts.

 

As shown in the below screenshot, even though "WmiPrvSE.exe" is a legitimate Microsoft file (it has a valid Microsoft signature as well as a known trusted hash value), due to its behavioral activity (shown in the Instant IOC section) we're able to assign it a high behavioral score of 386. It should also be noted that any of the suspicious IIOCs detected could trigger a real-time alert over syslog or e-mail for early detection, even though the attacker is using advanced techniques to avoid detection.

 

 

 

Similarly, on the network side, even though the attacker leverages compromised legitimate sites and uses standard protocols (HTTP) with encrypted payloads to avoid detection and suspicion, it is still possible to detect these behaviors using RSA NetWitness Network by looking for indicators such as POST with no GET, suspicious user agents, missing headers, or other anomalies.

 

 

 

 

Indicators

The following are IOCs that can be used to check whether activity from this APT currently exists in your environment.

This list is not exhaustive and is only based on what has been seen during this test.

 

Malware Hash

SHA-256: bfb4fc96c1ba657107c7c60845f6ab720634c8a9214943b5221378a37a8916cd

MD5: 16ac1a2c1e1c3b49e1a3a48fb71cc74f

 

Domains

  • wegallop.com
  • apidubai.ae
  • hmholdings360.co.za
  • alaqaba.com
  • triconfabrication.com
  • themotoringcalendar.co.za
  • nakoserum.com
  • mediaology.com.pk
  • goolineb2b.com
  • addorg.org
  • mumtazandbrohi.com
  • pmdpk.com
  • buy4you.pk
  • gcmbdin.edu.pk
  • mycogentrading.com
  • ipripak.org
  • botanikbahcesi.com
  • dailysportsgossips.com
  • ambiances-toiles.fr
  • britishofficefitout.com
  • canbeginsaat.com

 

IP Addresses

  • 195.229.192.139
  • 185.56.88.14
  • 196.40.100.202
  • 45.33.114.180
  • 173.212.229.48
  • 54.243.123.39
  • 196.41.137.185
  • 209.99.40.223
  • 192.185.166.227
  • 89.107.58.132
  • 86.107.58.132
  • 192.185.166.225
  • 192.185.75.15
  • 94.130.116.248
  • 192.169.82.62
  • 86.96.202.165
  • 196.40.100.204
  • 192.185.166.22
  • 5.250.241.18
  • 104.18.54.26
  • 217.160.0.2
  • 192.185.24.71
  • 185.82.222.239

RSA NetWitness gives you the ability to use remote Virtual Log Collectors (VLCs) to reduce your footprint and the number of ports required. RSA NetWitness can leverage different mechanisms to retrieve (pull) or send (push) logs from or to a log collector.

 

Many customers and RSA partners use a VLC to send logs from a remote location to a cloud or centralized infrastructure sitting behind one or more firewalls in an isolated network. In that case the VLC won't have a direct route to this central location, and the following article will help you configure your platform properly.

 

Before deploying your VLC, verify that the host configuration for your head unit is set to nw-node-zero:

 

When this is done, deploy your VLC in your virtual infrastructure and launch nwsetup-tui to continue the installation.  When the setup asks you for the IP of Node Zero, enter the external IP of your head unit. For example, in an isolated network, a firewall controls all communication into the network:

 

Corporate LAN (192.168.0.x) --> Firewall WAN interface (192.168.0.100) --> Firewall LAN interface (isolated network, 10.60.130.1) --> NetWitness head unit (10.60.130.100)

 

NOTE: You need to open the required ports for this installation in your firewall. You can refer to the official documentation on network/port requirements at the following link: Deployment: Network Architecture and Ports

 

In this example, the Node Zero external IP is 192.168.0.100; when completing the setup, make sure you use the external Node Zero IP (the firewall WAN interface for this isolated network).

 

When this is done, launch the install process on the VLC and after several minutes the VLC will be up and running:

 

Next, we need to configure the VLC to send the logs to the log decoder behind the Firewall:

 

During this process the operation will work, but the IP recorded will be the internal IP of the Log Decoder, and we need to change this information to re-establish communication.

 

We need to modify the shovel.conf file so that our logs are sent to the Log Decoder using the same approach for this isolated network. To facilitate this, you can add another IP to your firewall and configure a one-to-one NAT for your Log Decoder. For this example, we have a one-to-one NAT for the Log Decoder using IP 192.168.0.101 on the external interface of the firewall.

 

The shovel.conf file is located on the VLC at the following path:

/etc/rabbitmq

 

Connect to your VLC using SSH, edit the file, and change the IP to the external IP of your firewall for the isolated network:
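
A minimal sketch of that edit (the internal Log Decoder IP 10.60.130.101 is a placeholder; substitute your own addressing):

cp /etc/rabbitmq/shovel.conf /etc/rabbitmq/shovel.conf.bak
sed -i 's/10\.60\.130\.101/192.168.0.101/g' /etc/rabbitmq/shovel.conf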

 

When this is completed, reboot your VLC; in the RSA NetWitness UI you will see the green dot confirming that the communication is working:

 

Context menu actions have long been a part of the RSA NetWitness Platform. v11.2 brought a few nice touches to help manage the menu items as well as extend these functions into more areas of the product.

 

See here for previous information on the External Lookup options:

Context Menus - OOTB Options 

 

And these for Custom Additions that are useful to Analysts:

Context Menu - Microsoft EventID 

Context Menu - VirusTotal Hash Lookup 

Context Menu - RSA NW to Splunk 

Context Menu - Investigate IP from DNS 

Context Menu - Cymon.io 

 

As always, the administration location is here:

Admin > System > Context Menu Actions

 

The first thing you will notice is a somewhat different look, since a good bit of cleanup has been done in the UI.

 

Before we start trimming the menu items... here is what it looks like before the changes:

Data Science/Scan for Malware/Live Lookup are all candidates for reduction.

 

When you open an existing action or create a new one, you will also see some new improvements.

No longer just a large block of text that you can edit only if you know what to change and where, but a set of options for implementing your custom action (or tweaking existing ones).

 

You can switch to the advanced view to get back to the old freeform world if you want to.

 

Clean up

To clean up the menu for your analysts, you might consider disabling the following items:

Sort by Group Name, locate the Data Science group, and disable all four of its rules if you don't have an RSA Warehouse installed.

Disable any of the External lookup items that are not used or not important for your analysts

Scan for Malware: not needed if you are a logs-only deployment, or if you have packets or endpoint but don't use Malware Analysis.

Live Lookup: mostly doesn't provide value to analysts.

Now you should have a nice clean right-click action menu available to investigators, helping them do their job better and faster.

The RSA NetWitness Platform has multiple new enhancements in how it handles Lists and Feeds in v11.x.  One of the enhancements introduced in the v11.1 release was the ability to use Context Hub Lists as blacklist and/or whitelist enrichment sources in ESA alerts.  This feature gives analysts and administrators a much easier path to tuning and updating ESA alerts than was previously available.

 

In this post, I'll be explaining how you can take that one step further and create ESA alerts that automatically update Context Hub Lists that can in turn be used as blacklist/whitelist enrichment sources in other ESA alerts.  The capabilities you'll use to accomplish this will be the ESA's script notifications, the ESA's Enrichment Sources and the Context Hub's List Data Source.

 

Your first step is to determine what kind of data you want to put into the Context Hub List.  For my test case I chose source and destination IP addresses.  Your next step is to determine where this List should live so that the Context Hub can access it.  The Context Hub can pull Lists either via HTTP, HTTPS, or from its local file system on the ESA appliance - for my test case I chose the local filesystem.

 

With that decided, your next step is to create the file that will become the List - the Context Hub looks within the /var/netwitness/contexthub-server/data directory on the ESA, so you'll create a CSV file in this location and add headers to help you (and others) know what data the List contains:

 

**NOTE** Be sure to make this CSV writeable for all users, e.g.:

# chmod 666 esaList.csv
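
For instance, the whole file setup can be done in a few shell commands (the ip_address header is my placeholder; date_added and source_alert are the columns referenced later in this post):

cd /var/netwitness/contexthub-server/data
echo "ip_address,date_added,source_alert" > esaList.csv
chmod 666 esaList.csv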

 

Next, add this CSV to the CH as a Data Source.  In Admin / Services / Contexthub Server / Config --> Data Sources, choose List:

 

Select "Local File Store," then give your List a name and description and choose the CSV from the dropdown:

 

If you created headers in the CSV, select "With Column Headers" and then validate that the Context Hub can see and read your file.  After validation is successful, tell the Context Hub what types of meta are in each column, whether to Append to or Overwrite values in the List when it updates, and also whether to automatically expire (delete) values once they reach a certain age (maximum value here is 30 days):

 

For my test case, I chose not to map the date_added and source_alert columns from the CSV to any meta keys, because I only want them for my own awareness, to know where each value came from (i.e., what ESA alert) and when it was added.  Also, I chose to Append new values rather than Overwrite, because the Context Hub List has built-in functionality that identifies new and unique values within the CSV and adds only those to the List.  Append will also enable the List Value Expiration feature to automatically remove old values.

 

Once you have selected your options, save your settings to close the wizard.  Before moving on, there are a few additional configuration options to point out which are accessible through the gear icon on the right side of the page.  These settings will allow you to modify the existing meta mapping or add new ones, adjust the Expiration, enable or disable whether the List's values are loaded into cache, and most importantly - the List's update schedule, or Recurrence:

 

**NOTE** At the time of this writing, the Schedule Recurrence has a bug that causes the Context Hub to ignore any user-defined schedule, which means it will revert to the default setting and only automatically update every 12 hours.

 

With the Context Hub List created, you can move on to the script and notification template that you will use to auto-update the CSV (both are attached to this blog - you can upload/import them as is, or feel free to modify them however you like for your use cases / environment).  You can refer to the documentation (System Configuration Guide for Version 11.x - Table of Contents) to add notification outputs, servers, and templates.

 

To test that all of this works and writes what you want to the CSV file (for my test case, IP source and destination values), create an ESA alert that will fire with the data points you want to capture, and then add the script notification, server, and template to the alert:
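
The attached script and template do the real work; purely to illustrate the mechanism, a stand-in script could be as small as the following (this assumes the notification server passes the rendered template output to the script as its first argument):

#!/bin/bash
# append the rendered "value,date,alert" line(s) from the ESA notification to the List CSV
CSV=/var/netwitness/contexthub-server/data/esaList.csv
echo "$1" >> "$CSV"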

 

After deploying your alert and generating the traffic (or waiting) for it to fire, verify that your CSV auto-updates with the values from the alert by keeping an eye on the CSV file.  Additionally, you can force your Context Hub List to update by re-opening your List's settings (the gear icon mentioned above), re-saving your existing settings, and then checking its values within the Lists tab:

 

 

You'll notice that in my test case, my CSV file has 5 entries in it while my Context Hub List only has 3 - this is a result of the automatic de-duplication mentioned above; the List is only going to be Appending new and unique entries from the CSV.

 

Next up, add this List as an Enrichment Source to your ESA.  Navigate to Configure / ESA Rules --> Settings tab / Enrichment Sources, and add a new Context Hub source:

 

In the wizard, select the List you created at the start of this process and the columns that you will want to use within ESA alerts:

 

With that complete, save and exit the wizard, and then move on to the last step - creating or modifying an ESA alert to use this Context Hub List as a whitelist or blacklist.

 

Unless your ESA alert requires advanced logic and functionality, you can use the ESA Rule Builder to create the alert.  Within your alert statement, build out the alert logic and add a Meta Whitelist or Meta Blacklist Condition, depending on your use case:

 

Select the Context Hub List you just added as an Enrichment Source:

 

Select the column from the Context Hub List that you want to match against within your alert:

 

Lastly, select the NetWitness meta key that you want to match against it:

 

You can add additional Statements and additional blacklists or whitelists to your alert as your use case dictates.  Once complete, save and deploy your alert, and then verify that your alerts are firing as expected:

 

And finally, give yourself a pat on the back.

For those who are interested in becoming certified on the RSA NetWitness Platform - we have some great news for you!  This process just became a whole lot easier... you no longer have to travel to a Pearson VUE testing center to take the certification exams.  All four of the RSA NetWitness certifications can now be taken through online proctored testing!  That's right... 100% online!

 

You can find all of the details on the RSA Certification Program page.  There's also a page specifically for the RSA NetWitness Platform certifications where you can find details about the certifications, try out one of the practice exams, register to take a certification and much, much more.  

 

RSA NetWitness has 4 separate certifications available:

  1. RSA NetWitness Logs and Network Admin
  2. RSA NetWitness Logs and Network Analyst
  3. RSA NetWitness Endpoint Admin
  4. RSA NetWitness Endpoint Analyst

 

I wish you all the best of luck and encourage you to continue your professional development by becoming certified on our technology.  

The RSA NetWitness Platform has an integrated agent available that currently does base Endpoint Detection and Response (EDR) functions but will shortly have more complete parity with ECAT (in v11.x).  One beneficial feature of the Insights agent (otherwise called the NWE Insights Agent) is Windows log collection and forwarding.

 

Here is the agent Install Guide for v11.2:

https://community.rsa.com/docs/DOC-96206

 

The Endpoint packager is built from the Endpoint Server (Admin > Services), where you can define your configuration options.  To enable Windows log collection, check the box at the bottom of the initial screen.

 

This expands the options for Windows log collection...

Define one or more Log Decoder/Collector services in the current RSA NetWitness deployment to send the endpoint logs to (define a primary and secondary destination)

 

Define your channels to collect from

The default list includes 4 channels (System, Security, Application and ForwardedEvents)

You can also add any channel you want as long as you know the EXACT name of it

In the Enter Filter option in the selection box, enter the channel name

In this case Windows PowerShell (again, make sure you match the exact event channel name or you will run into issues)

We could also choose to add some other useful event channels

  • Microsoft-Windows-Sysmon/Operational
  • Microsoft-Windows-TerminalServices-LocalSessionManager/Operational
  • Microsoft-Windows-TerminalServices-RemoteConnectionManager/Operational

 

You can choose to filter these channels to include or exclude certain events as well.

 

Finally, set the protocol to UDP, TCP, or TLS.

 

Generate Agent generates the download that includes the packager and the config files that define the agent settings.

 

From there you can build the agents for Windows, Linux and Mac from a local windows desktop.

Agents are installed as normal using local credentials or your package management tool of choice.

 

Now that you have Windows events forwarded to your log decoders, make sure you have the Windows parser downloaded from RSA Live and deployed to your log decoders to start parsing the events.

The Windows parser is slightly different from the other Windows log parsers (nic, snare, er) in that there are only 7 message sections (one each for the default channels, plus a TestEvent and Windows_Generic).

 

For the OOTB channels the Message section defines all the keys that could exist and then maps them to the table-map.xml values as well as the ec tags. 

Log Parser Customization 

 

Windows_Generic is the catchall for this parser, and any custom-added channel will only parse from this section.  The catchall needs some help to make use of the keys that come from the channels we selected, which is where a windowsmsg-custom.xml (a custom addition to the Windows parser) comes in (an internal feature enhancement has been added to make these OOTB).

 

Get the windows-custom parser from here:

GitHub - epartington/rsa_nw_log_windows: rsa windows parser for nw endpoint windows logs 

Add to your windows parser folder on the log decoder(s) that you configured in the endpoint config

/etc/netwitness/ng/envision/etc/devices/windows/

 

Reload your parsers.
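
One way to do that without restarting the service is via the Log Decoder's REST interface (this assumes REST is enabled on the default Log Decoder port 50102; host and credentials are placeholders):

curl -u admin 'http://logdecoder:50102/decoder/parsers?msg=reload'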

Now you should have additional meta available for these additional event channels.

 

 

 

What happens if you want to change your logging configuration but don't want to re-roll an agent? In the Log Collection Guide here you can see how to add a new config file to the agent directory to update the channel information

https://community.rsa.com/docs/DOC-95781

(page 113)

 

Currently the free NW Endpoint Insights agent doesn't have agent config management included so this needs to be manual at the moment.  Future versions will include config management to make this change easier.

 

Now you can accomplish things like this:

Logs - Collecting Windows Events with WEC 

All without needing a WEC/WEF server, especially if you are deploying Sysmon and want to use the NWE agent to pull back the event channel.

 

While you are in the Log Collection frame of mind, why not create a Profile in Investigation for NWE logs. 

Pre-Query = device.type='windows'

 

In 11.2 you can create a profile (which isn't new) as well as meta and column groups that are flexible (new in 11.2), which means the pre-query is locked but you are able to switch meta groups within the profile (very handy).

 

 

Hopefully this helpful addition to our agent reduces the friction of collecting Windows events.  If there are specific event channels high on your priority list for collection, add them to the comments below and I'll get them added to the internal RFE.

We at RSA value your thoughts and feedback on our products. Please tell us what you think about RSA NetWitness by participating directly in our upcoming user research studies. 

 

What's in it for you?

 

You will get a chance to play around with new and exciting features and help us shape the future of our product through your direct feedback. After submitting your information in the survey below, if you are a match for an upcoming study, one of our researchers will work with you to facilitate a study session, either in a lab setting or remotely. There are no right or wrong answers in our studies - every single piece of your feedback will help us improve the RSA NetWitness experience. Please join us in this journey by completing the short survey below so that we can see if you are a match for one of our studies.

 

This survey should take less than a minute of your time.

 

Take the survey here.
