Tuesday, November 13, 2018

Your Boss Wants You to Get Certified—Find Out Why That’s a Good Thing

Recently I received an email from Cisco with the subject line you see in the title of this blog post. Obviously I was curious, so I read it. The email pointed to a report, "The Impact and Importance of Technical Certifications: The Management View". You can get the report here.

There has always been and (more than likely) will always be a debate about the relevance and importance of certification. Cisco approaches this from both a general certification perspective and a Cisco-specific perspective, looking at the employee as well as the organization. In this debate, I'm on Cisco's side of the argument. I place heavy emphasis on certification when interviewing potential candidates and as part of my team members' career development. That said, if you are thinking that interviewing with me and showcasing a ton of certifications will get you the job, unfortunately I can tell you right now that it more than likely will not, as I look at more than just those certs. Certifications tell me you are interested in your career development and put in the effort; they do not confirm that you have all the knowledge or are the best fit for the job.

Anyhow, rather than getting too long winded and deviating from the main focus of this post, here are the ten takeaways I believe you should focus on from the report. Fortunately or unfortunately, I agree with everything the report says:


  1. Employees with technical certifications drive value.
  2. Certified employees are more effective (improved service quality), drive efficiency (operational excellence and return on investment) and are more engaged (workplace satisfaction, attraction and retention).
  3. Technical certifications help your informal learning as well.
  4. Certified employees advance the knowledge of their colleagues through mentoring, collaboration, project contributions, etc.
  5. Certified team members working together provide benefits to each other and to the organization.
  6. Technical training and the resulting certifications are seen as strong medicine for many of our technology workforce ills.
  7. Certified employees produce better results than their non-certified colleagues.
  8. Compared to non-certified team members, certified team members complete their tasks 31 percent faster, make 29 percent fewer errors and reduce the costs associated with projects by 29 percent.
  9. Increased salary is typically a general benefit of gaining additional certifications.
  10. Become certified (and become knowledgeable) and you will be rewarded either by your current employer or the next.
I believe the above represents some excellent reasons why we should all strive to achieve our certifications (technical or not) in a timely manner.

If you are reading this and have an opinion on certifications, drop me a line in the comments section and let's continue the conversation.

Saturday, November 10, 2018

Visualizing your Zeek (Bro) data with Splunk - x509.log (x509 Certificate logs)

Looking at x509 certificate information can give you a good sense of the type of secure communication occurring in your environment. On most days you should expect to see some of the more popular Certificate Authorities (CAs) within your environment. Seeing a "strange" name may be enough to trigger you to investigate that communication.

Let's first see the available fields, as we have always done before.

root@securitynik-host:/opt/bro/logs/current# bro-cut -C < x509.log | head --lines=10 --verbose
#fields ts      id      certificate.version     certificate.serial      certificate.subject     certificate.issuer      certificate.not_valid_before    certificate.not_valid_after   certificate.key_alg     certificate.sig_alg     certificate.key_type    certificate.key_length  certificate.exponent    certificate.curve     san.dns san.uri san.email       san.ip  basic_constraints.ca    basic_constraints.path_len
#types  time    string  count   string  string  string  time    time    string  string  string  count   string  string  vector[string]  vector[string]  vector[string]        vector[addr]    bool    count
1541376003.219940       FxRAX42VHMcyayZAi8      3       070FD92417F460AC        CN=*.google.com,O=Google LLC,L=Mountain View,ST=California,C=US CN=Google Internet Authority G3,O=Google Trust Services,C=US  1539704220.000000       1546965420.000000       id-ecPublicKey  sha256WithRSAEncryption ecdsa   256     -    prime256v1       *.google.com,*.android.com,*.appengine.google.com,*.cloud.google.com,*.g.co,*.gcp.gvt2.com,*.ggpht.cn,*.google-analytics.com,*.google.ca,*.google.cl,*.google.co.in,*.google.co.jp,*.google.co.uk,*.google.com.ar,*.google.com.au,*.google.com.br,*.google.com.co,*.google.com.mx,*.google.com.tr,*.google.com.vn,*.google.de,*.google.es,*.google.fr,*.google.hu,*.google.it,*.google.nl,*.google.pl,*.google.pt,*.googleadapis.com,*.googleapis.cn,*.googlecommerce.com,*.googlevideo.com,*.gstatic.cn,*.gstatic.com,*.gstaticcnapps.cn,*.gvt1.com,*.gvt2.com,*.metric.gstatic.com,*.urchin.com,*.url.google.com,*.youtube-nocookie.com,*.youtube.com,*.youtubeeducation.com,*.youtubekids.com,*.yt.be,*.ytimg.com,android.clients.google.com,android.com,developer.android.google.cn,developers.android.google.cn,g.co,ggpht.cn,goo.gl,google-analytics.com,google.com,googlecommerce.com,source.android.google.cn,urchin.com,www.goo.gl,youtu.be,youtube.com,youtubeeducation.com,youtubekids.com,yt.be -       -       -       F       -
1541376005.435851       FjKF7e2fCp33DNHup1      3       2D0000CDC4C84DD1293BFC9BB400000000CDC4  CN=*.msedge.net CN=Microsoft IT TLS CA 5,OU=Microsoft IT,O=Microsoft Corporation,L=Redmond,ST=Washington,C=US 1507851234.000000       1570923234.000000       rsaEncryption   sha256WithRSAEncryption rsa     2048    65537-*.msedge.net,*.a-msedge.net,a-msedge.net,b-msedge.net,*.b-msedge.net,c-msedge.net,*.c-msedge.net,dc-msedge.net,*.dc-msedge.net,*.lbas.msedge.net,*.test.msedge.net,*.azp.footprintdns.com,*.footprintdns.com,*.clo.footprintdns.com,*.any.footprintdns.com,*.nrb.footprintdns.com,*.perf.msedge.net   -       -       -    --

Now that the fields are identified, this Splunk search will help us extract them.

index=_* OR index=* sourcetype=Bro-Security-Monitoring source="/opt/bro/logs/current/x509.log" NOT "#fields" 
|  rex field=_raw "(?<ts>.*?\t)(?<id>.*?\t)(?<certificate_version>.*?\t)(?<certificate_serial>.*?\t)(?<certificate_subject>.*?\t)(?<certificate_issuer>.*?\t)(?<certificate_not_valid_before>.*?\t)(?<certificate_not_valid_after>.*?\t)(?<certificate_key_alg>.*?\t)(?<certificate_sig_alg>.*?\t)(?<certificate_key_type>.*?\t)(?<certificate_key_length>.*?\t)(?<certificate_exponent>.*?\t)(?<certificate_curve>.*?\t)(?<san_dns>.*?\t)(?<san_uri>.*?\t)(?<san_email>.*?\t)(?<san_ip>.*?\t)(?<basic_constraints_ca>.*?\t)" 
|  stats count by ts,id,certificate_version,certificate_serial,certificate_subject,certificate_issuer,certificate_not_valid_before,certificate_not_valid_after,certificate_key_alg,certificate_sig_alg,certificate_key_type,certificate_key_length,certificate_exponent,certificate_curve,san_dns,san_uri,san_email,san_ip,basic_constraints_ca



Now that we have those fields extracted, like always, let's pick one of the fields. In this example, let's look at the certificate subject.

index=_* OR index=* sourcetype=Bro-Security-Monitoring source="/opt/bro/logs/current/x509.log" NOT "#fields" 
|  rex field=_raw "(?<ts>.*?\t)(?<id>.*?\t)(?<certificate_version>.*?\t)(?<certificate_serial>.*?\t)(?<certificate_subject>.*?\t)(?<certificate_issuer>.*?\t)(?<certificate_not_valid_before>.*?\t)(?<certificate_not_valid_after>.*?\t)(?<certificate_key_alg>.*?\t)(?<certificate_sig_alg>.*?\t)(?<certificate_key_type>.*?\t)(?<certificate_key_length>.*?\t)(?<certificate_exponent>.*?\t)(?<certificate_curve>.*?\t)(?<san_dns>.*?\t)(?<san_uri>.*?\t)(?<san_email>.*?\t)(?<san_ip>.*?\t)(?<basic_constraints_ca>.*?\t)" 
|  stats count by certificate_subject 
|  sort - count

As always, let's take a look at the least seen issuers.

index=_* OR index=* sourcetype=Bro-Security-Monitoring source="/opt/bro/logs/current/x509.log" NOT "#fields" 
|  rex field=_raw "(?<ts>.*?\t)(?<id>.*?\t)(?<certificate_version>.*?\t)(?<certificate_serial>.*?\t)(?<certificate_subject>.*?\t)(?<certificate_issuer>.*?\t)(?<certificate_not_valid_before>.*?\t)(?<certificate_not_valid_after>.*?\t)(?<certificate_key_alg>.*?\t)(?<certificate_sig_alg>.*?\t)(?<certificate_key_type>.*?\t)(?<certificate_key_length>.*?\t)(?<certificate_exponent>.*?\t)(?<certificate_curve>.*?\t)(?<san_dns>.*?\t)(?<san_uri>.*?\t)(?<san_email>.*?\t)(?<san_ip>.*?\t)(?<basic_constraints_ca>.*?\t)" 
| rare limit=50 certificate_issuer

Let's wrap this up by looking at the DNS information within these certificates.

This filter allows us to extract that DNS information. As you can see in the search filter, I have whitelisted some domains.

index=_* OR index=* sourcetype=Bro-Security-Monitoring source="/opt/bro/logs/current/x509.log" NOT("#fields" OR ".comodo.com" OR ".google.com" OR ".microsoft.com" OR ".windows.com") 
|  rex field=_raw "(?<ts>.*?\t)(?<id>.*?\t)(?<certificate_version>.*?\t)(?<certificate_serial>.*?\t)(?<certificate_subject>.*?\t)(?<certificate_issuer>.*?\t)(?<certificate_not_valid_before>.*?\t)(?<certificate_not_valid_after>.*?\t)(?<certificate_key_alg>.*?\t)(?<certificate_sig_alg>.*?\t)(?<certificate_key_type>.*?\t)(?<certificate_key_length>.*?\t)(?<certificate_exponent>.*?\t)(?<certificate_curve>.*?\t)(?<san_dns>.*?\t)(?<san_uri>.*?\t)(?<san_email>.*?\t)(?<san_ip>.*?\t)(?<basic_constraints_ca>.*?\t)" 
|  stats count by san_dns
|  sort -count

Ok then. That's it for this series. If you are reading this and would like me to extract other logs, feel free to drop me a line.

The next set of posts on Zeek (Bro) will more than likely be around signatures and scripts.

Hope you enjoyed this series.

Posts in this series:
Visualizing your Zeek (Bro) data with Splunk - The Setup
Visualizing your Zeek (Bro) data with Splunk - conn.log (connection logs)
Visualizing your Zeek (Bro) data with Splunk - http.log (http logs)
Visualizing your Zeek (Bro) data with Splunk - dns.log (dns logs)
Visualizing your Zeek (Bro) data with Splunk - x509.log (x509 Certificate logs)

Visualizing your Zeek (Bro) data with Splunk - dns.log (dns logs)

DNS logs are one of the most critical sources of insight into what is going on in your environment. Like HTTP, there is a push towards encrypting DNS traffic as well, driven by OpenDNS via DNSCrypt, by Cloudflare with its 1.1.1.1 resolver, and now by the IETF with DNS over TLS.

As in the previous posts, we need to understand the structure of the DNS log before we begin to parse it. Let's do that.

root@securitynik-host:/opt/bro/logs/current# bro-cut -C < dns.log | head --lines=10 --verbose
#fields ts      uid     id.orig_h       id.orig_p       id.resp_h       id.resp_p       proto   trans_id        rtt     query   qclass  qclass_name     qtype   qtype_name      rcode   rcode_name      AA      TC      RD      RA      Z       answers TTLs    rejected
#types  time    string  addr    port    addr    port    enum    count   interval        string  count   string  count   string  count   string  bool    bool bool     bool    count   vector[string]  vector[interval]        bool
1541376001.059940       CvWBmNcrGUcDWf60g       192.168.0.26    52542   208.67.222.222  53      udp     43942   0.047960        ssl.gstatic.com 1       C_INTERNET    28      AAAA    0       NOERROR F       F       T       T       0       2607:f8b0:400b:80e::2003        300.000000      F
1541376003.063949       C2xxe74mFxdHKzMfHj      192.168.0.26    50157   208.67.222.222  53      udp     1261    0.039924        play.google.com 1       C_INTERNET    1       A       0       NOERROR F       F       T       T       0       172.217.0.110   300.000000      F

Let's now identify the Splunk query that will extract these fields.

index=_* OR index=* sourcetype=Bro-Security-Monitoring source="/opt/bro/logs/current/dns.log" NOT "#fields" NOT "\\x00\\x00\\x00\\x00" NOT "ip6.arpa"
| rex field=_raw "(?<ts>.*?\t)(?<uid>.*?\t)(?<orig_h>.*?\t)(?<orig_p>.*?\t)(?<resp_h>.*?\t)(?<resp_p>.*?\t)(?<proto>.*?\t)(?<trans_id>.*?\t)(?<rtt>.*?\t)(?<query>.*?\t)(?<qclass>.*?\t)(?<qclass_name>.*?\t)(?<qtype>.*?\t)(?<qtype_name>.*?\t)(?<rcode>.*?\t)(?<rcode_name>.*?\t)(?<aa>.*?\t)(?<tc>.*?\t)(?<rd>.*?\t)(?<ra>.*?\t)(?<z>.*?\t)(?<answers>.*?\t)(?<ttls>.*?\t)"
| stats count by ts,uid,orig_h,orig_p,resp_h,resp_p,proto,trans_id,rtt,query,qclass,qclass_name,qtype,qtype_name,rcode,rcode_name,aa,tc,rd,ra,z,answers,ttls



Now that we have all the fields extracted, as we did previously, we can obtain statistics on specific fields. Let's first take a look at the top 50 domains seen in our "dns.log" file.

index=_* OR index=* sourcetype=Bro-Security-Monitoring source="/opt/bro/logs/current/dns.log" NOT "#fields" NOT "\\x00\\x00\\x00\\x00" NOT ".arpa" 
| rex field=_raw "(?<ts>.*?\t)(?<uid>.*?\t)(?<orig_h>.*?\t)(?<orig_p>.*?\t)(?<resp_h>.*?\t)(?<resp_p>.*?\t)(?<proto>.*?\t)(?<trans_id>.*?\t)(?<rtt>.*?\t)(?<query>.*?\t)(?<qclass>.*?\t)(?<qclass_name>.*?\t)(?<qtype>.*?\t)(?<qtype_name>.*?\t)(?<rcode>.*?\t)(?<rcode_name>.*?\t)(?<aa>.*?\t)(?<tc>.*?\t)(?<rd>.*?\t)(?<ra>.*?\t)(?<z>.*?\t)(?<answers>.*?\t)(?<ttls>.*?\t)" 
| stats count by query 
| sort -count limit=50



As always, just as you pay attention to the top domains seen, it is also important to look at the least seen ones. Let's use the following query to get that information. Remember, unique values can stand out in a way that makes you wonder why they are there.

index=_* OR index=* sourcetype=Bro-Security-Monitoring source="/opt/bro/logs/current/dns.log" NOT "#fields" NOT "\\x00\\x00\\x00\\x00" NOT ".arpa" 
| rex field=_raw "(?<ts>.*?\t)(?<uid>.*?\t)(?<orig_h>.*?\t)(?<orig_p>.*?\t)(?<resp_h>.*?\t)(?<resp_p>.*?\t)(?<proto>.*?\t)(?<trans_id>.*?\t)(?<rtt>.*?\t)(?<query>.*?\t)(?<qclass>.*?\t)(?<qclass_name>.*?\t)(?<qtype>.*?\t)(?<qtype_name>.*?\t)(?<rcode>.*?\t)(?<rcode_name>.*?\t)(?<aa>.*?\t)(?<tc>.*?\t)(?<rd>.*?\t)(?<ra>.*?\t)(?<z>.*?\t)(?<answers>.*?\t)(?<ttls>.*?\t)" 
| rare limit=50 query

Now that's it for visualizing Zeek (Bro) DNS data. See you in the next post where we look at x509 logs.

References:
https://www.opendns.com/about/innovations/dnscrypt/
https://www.cloudflare.com/learning/dns/what-is-1.1.1.1/
https://www.rfc-editor.org/rfc/rfc7858.txt


Posts in this series:
Visualizing your Zeek (Bro) data with Splunk - The Setup
Visualizing your Zeek (Bro) data with Splunk - conn.log (connection logs)
Visualizing your Zeek (Bro) data with Splunk - http.log (http logs)
Visualizing your Zeek (Bro) data with Splunk - dns.log (dns logs)
Visualizing your Zeek (Bro) data with Splunk - x509.log (x509 Certificate logs)

Visualizing your Zeek (Bro) data with Splunk - conn.log (connection logs)

To be able to visualize this data, we first need to understand its structure. Zeek's (Bro's) data is in a tab-delimited format by default. To verify this, let's look at a sample connection log - conn.log.

The tool we will use to help us look at Zeek's (Bro's) data is "bro-cut". As always, you should look at the help of your tools before you utilize them.


root@securitynik-host:/opt/bro/logs/current# bro-cut --help

bro-cut [options] [<columns>]

Extracts the given columns from an ASCII Bro log on standard input.
If no columns are given, all are selected. By default, bro-cut does
not include format header blocks into the output.

Example: cat conn.log | bro-cut -d ts id.orig_h id.orig_p

    -c       Include the first format header block into the output.
    -C       Include all format header blocks into the output.
    -d       Convert time values into human-readable format.
    -D <fmt> Like -d, but specify format for time (see strftime(3) for syntax).
    -F <ofs> Sets a different output field separator.
    -n       Print all fields *except* those specified.
    -u       Like -d, but print timestamps in UTC instead of local time.
    -U <fmt> Like -D, but print timestamps in UTC instead of local time.

For time conversion option -d or -u, the format string can be specified by
setting an environment variable BRO_CUT_TIMEFMT.

For this series of posts we will focus on the "-C" option. This gives us the opportunity to see the field headers, which is important because we need to know the placement of the fields to properly parse them in Splunk.

Now, as you may see above in "Example: cat conn.log | bro-cut ...", bro-cut's input typically comes from the output of "cat", piped in. We will do it a slightly different way: we will instead redirect the "conn.log" (and future log files) into "bro-cut" using "<".

Let's get going with looking at the structure of the "conn.log" file.


root@securitynik-host:/opt/bro/logs/current# bro-cut -C < conn.log | head --lines=10 --verbose
....
#fields ts      uid     id.orig_h       id.orig_p       id.resp_h       id.resp_p       proto   service duration        orig_bytes      resp_bytes      conn_state    local_orig      local_resp      missed_bytes    history orig_pkts       orig_ip_bytes   resp_pkts       resp_ip_bytes   tunnel_parents
#types  time    string  addr    port    addr    port    enum    string  interval        count   count   string  bool    bool    count   string  count   count   count   count   set[string]
1541350796.901974       C54zqz17PXuBv3HkLg      192.168.0.26    54855   54.85.115.89    443     tcp     ssl     0.153642        1147    589     SF      T    F0       ShADadfF        7       1439    8       921     (empty)
1541350796.904578       CFsKQb2ZSp2qo1jf7a      192.168.0.26    54856   54.85.115.89    443     tcp     ssl     0.195532        1127    489     SF      T    F0       ShADadfF        7       1419    8       821     (empty)

As we can see above and to the right of "#fields", there are a number of fields starting with "ts", "uid", "id.orig_h", "id.orig_p", etc. Now that we know the structure, let's go back to Splunk and build (extract) these out.

Our search filter for Splunk is now:

index=_* OR index=* sourcetype=Bro-Security-Monitoring source="/opt/bro/logs/current/conn.log" NOT "#fields"
| rex field=_raw "(?<ts>.*?\t)(?<uid>.*?\t)(?<orig_h>.*?\t)(?<orig_p>.*?\t)(?<resp_h>.*?\t)(?<resp_p>.*?\t)(?<proto>.*?\t)(?<service>.*?\t)(?<duration>.*?\t)(?<orig_bytes>.*?\t)(?<resp_bytes>.*?\t)(?<conn_state>.*?\t)(?<local_orig>.*?\t)(?<local_resp>.*?\t)(?<missed_bytes>.*?\t)(?<history>.*?\t)(?<orig_pkts>.*?\t)(?<orig_ip_bytes>.*?\t)(?<resp_pkts>.*?\t)(?<resp_ip_bytes>.*?\t)" 
|  stats count by ts,uid,orig_h,orig_p,resp_h,resp_p,proto,service,duration,orig_bytes,resp_bytes,conn_state,local_orig,local_resp,missed_bytes,history,orig_pkts,orig_ip_bytes,resp_pkts,resp_ip_bytes

The filter above extracts all the fields except "tunnel_parents". When I added that field, Splunk reported an error about needing to reconfigure the "limits.conf" file. I was not in the mood to troubleshoot that issue as it is not a priority at this time.
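
If you do want to chase that error down, one place to start (an assumption on my part, I did not verify it against that specific error) is the "[rex]" stanza in a local "limits.conf", which caps how much work the rex command is allowed to do per event. A minimal sketch, with a made-up value you would tune to whatever the error message suggests:

# /opt/splunk/etc/system/local/limits.conf -- hypothetical value, adjust as needed
[rex]
# maximum number of regex match attempts rex may make while extracting fields
match_limit = 200000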

Here is a sample screenshot of all the fields extracted.



Once we have extracted all the fields, what we do with each of them is up to us. Let's expand on this a bit more by looking first at the top 100 source IPs seen by Zeek (Bro).

index=_* OR index=* sourcetype=Bro-Security-Monitoring source="/opt/bro/logs/current/conn.log" NOT "#fields"
| rex field=_raw "(?<ts>.*?\t)(?<uid>.*?\t)(?<orig_h>.*?\t)(?<orig_p>.*?\t)(?<resp_h>.*?\t)(?<resp_p>.*?\t)(?<proto>.*?\t)(?<service>.*?\t)(?<duration>.*?\t)(?<orig_bytes>.*?\t)(?<resp_bytes>.*?\t)(?<conn_state>.*?\t)(?<local_orig>.*?\t)(?<local_resp>.*?\t)(?<missed_bytes>.*?\t)(?<history>.*?\t)(?<orig_pkts>.*?\t)(?<orig_ip_bytes>.*?\t)(?<resp_pkts>.*?\t)(?<resp_ip_bytes>.*?\t)" 
|  stats count by orig_h 
| sort -count limit=100

The above gives us the opportunity to identify the top 100 IP addresses.

However, just as it is important to know the top IP addresses in your environment, it is also critical to know the unique or rare ones. To help us with this, let's run another filter.

index=_* OR index=* sourcetype=Bro-Security-Monitoring source="/opt/bro/logs/current/conn.log" NOT "#fields" NOT src_ip=192.168.0.0/24 NOT src_ip=0.0.0.0
| rex field=_raw "(?<ts>.*?\t)(?<uid>.*?\t)(?<orig_h>.*?\t)(?<orig_p>.*?\t)(?<resp_h>.*?\t)(?<resp_p>.*?\t)(?<proto>.*?\t)(?<service>.*?\t)(?<duration>.*?\t)(?<orig_bytes>.*?\t)(?<resp_bytes>.*?\t)(?<conn_state>.*?\t)(?<local_orig>.*?\t)(?<local_resp>.*?\t)(?<missed_bytes>.*?\t)(?<history>.*?\t)(?<orig_pkts>.*?\t)(?<orig_ip_bytes>.*?\t)(?<resp_pkts>.*?\t)(?<resp_ip_bytes>.*?\t)" 
| rare limit=25 orig_h

This time we visualize using a pie chart.

As you see in the first extraction, we used a table view. In the second we used a pie chart. Feel free to experiment with what best suits you.

Let's move on to the top source and destination IP pairs along with the destination ports on which the communication is occurring.


index=_* OR index=* sourcetype=Bro-Security-Monitoring source="/opt/bro/logs/current/conn.log" NOT "#fields" NOT dst_ip=192.168.0.0/24 NOT dst_ip=208.67.222.222 NOT dst_ip=208.67.220.220 NOT dst_ip=224.0.0.0/8 NOT dst_ip=239.0.0.0/8 NOT dst_ip=255.255.255.255 NOT src_ip=0.0.0.0
| rex field=_raw "(?<ts>.*?\t)(?<uid>.*?\t)(?<orig_h>.*?\t)(?<orig_p>.*?\t)(?<resp_h>.*?\t)(?<resp_p>.*?\t)(?<proto>.*?\t)(?<service>.*?\t)(?<duration>.*?\t)(?<orig_bytes>.*?\t)(?<resp_bytes>.*?\t)(?<conn_state>.*?\t)(?<local_orig>.*?\t)(?<local_resp>.*?\t)(?<missed_bytes>.*?\t)(?<history>.*?\t)(?<orig_pkts>.*?\t)(?<orig_ip_bytes>.*?\t)(?<resp_pkts>.*?\t)(?<resp_ip_bytes>.*?\t)" 
|  stats count by orig_h,resp_h,resp_p 
| dedup orig_h,resp_h,resp_p 
| sort -count

The above produces the following:

At this point, let's wrap this up. As you have already extracted all the fields above, you can use any of them to gain statistics on your environment. I recommend that you also look at the destination IPs and ports.
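
If you want a possible starting point for that, here is one variation of the searches we have been building, this time grouped by responder IP and port. It assumes nothing beyond the field names already extracted by the rex above; adjust it to suit your environment.

index=_* OR index=* sourcetype=Bro-Security-Monitoring source="/opt/bro/logs/current/conn.log" NOT "#fields"
| rex field=_raw "(?<ts>.*?\t)(?<uid>.*?\t)(?<orig_h>.*?\t)(?<orig_p>.*?\t)(?<resp_h>.*?\t)(?<resp_p>.*?\t)(?<proto>.*?\t)(?<service>.*?\t)(?<duration>.*?\t)(?<orig_bytes>.*?\t)(?<resp_bytes>.*?\t)(?<conn_state>.*?\t)(?<local_orig>.*?\t)(?<local_resp>.*?\t)(?<missed_bytes>.*?\t)(?<history>.*?\t)(?<orig_pkts>.*?\t)(?<orig_ip_bytes>.*?\t)(?<resp_pkts>.*?\t)(?<resp_ip_bytes>.*?\t)" 
|  stats count by resp_h,resp_p 
| sort -count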

See you in the next post where we focus on the "http.log" file.

Posts in this series:
Visualizing your Zeek (Bro) data with Splunk - The Setup
Visualizing your Zeek (Bro) data with Splunk - conn.log (connection logs)
Visualizing your Zeek (Bro) data with Splunk - http.log (http logs)
Visualizing your Zeek (Bro) data with Splunk - dns.log (dns logs)
Visualizing your Zeek (Bro) data with Splunk - x509.log (x509 Certificate logs)


Visualizing your Zeek (Bro) data with Splunk - http.log (http logs)

HTTP logs, be it from your web server or any other source, should be an area of great focus. We do a large amount of our communication online, and with the continued push to the cloud, monitoring this traffic will become even more critical. There is obviously a challenge that comes with this as well: with the greater push for privacy on the internet, there is probably more HTTPS (encrypted HTTP) traffic online now than plain HTTP, and some browsers have even started to mark HTTP sites as not secure. There are interesting challenges ahead for network traffic monitoring. However, while we still have visibility into these logs, let's make the most of them.

Similar to the connection logs (conn.log), we need to understand the structure of the http.log file. Once again, let's leverage "bro-cut".

root@securitynik-host:/opt/bro/logs/current# bro-cut -C < http.log | head --lines=10 --verbose

#fields ts      uid     id.orig_h       id.orig_p       id.resp_h       id.resp_p       trans_depth     method  host    uri     referrer        version user_agent    request_body_len        response_body_len       status_code     status_msg      info_code       info_msg        tags    username        password     proxied  orig_fuids      orig_filenames  orig_mime_types resp_fuids      resp_filenames  resp_mime_types
#types  time    string  addr    port    addr    port    count   string  string  string  string  string  string  count   count   count   string  count   string  set[enum]       string  string  set[string]     vector[string]  vector[string]  vector[string]  vector[string]  vector[string]  vector[string]
1541354421.339883       CvkNpf1KBPCrkB72k1      192.168.0.22    52512   134.19.176.32   80      1       GET     s2.startv.biz   /stalker_portal/server/load.php?type=watchdog&action=get_events&cur_play_type=0&event_active_id=0&init=0&JsHttpRequest=1-xml& -       1.1     Mozilla/5.0 (QtEmbedded; U; Linux; C) AppleWebKit/533.3 (KHTML, like Gecko) MAG200 stbapp ver: 2 rev: 250 Safari/533.3        0       164     200     OK      -       -       (empty) -       -       -    --       -       FAO2AESJgAeFeJRHl       -       text/json
1541354421.319882       CbqzWP1bslQsNBC8qe      192.168.0.22    55654   134.19.176.32   80      2       GET     s2.startv.biz   /stalker_portal/server/load.php?type=watchdog&action=get_events&cur_play_type=0&event_active_id=0&init=0&JsHttpRequest=1-xml& -       -       Mozilla/5.0 (QtEmbedded; U; Linux; C) AppleWebKit/533.3 (KHTML, like Gecko) MAG200 stbapp ver: 2 rev: 250 Safari/533.3        0       0       -       -       -       -       (empty) -       -       -    --       -       -       -       -

Now that we have the structure, the following search can be used to extract those fields in Splunk.

index=_* OR index=* sourcetype=Bro-Security-Monitoring source="/opt/bro/logs/current/http.log" NOT "#fields"
|  rex field=_raw "(?<ts>.*?\t)(?<uid>.*?\t)(?<orig_h>.*?\t)(?<orig_p>.*?\t)(?<resp_h>.*?\t)(?<resp_p>.*?\t)(?<trans_depth>.*?\t)(?<method>.*?\t)(?<host>.*?\t)(?<uri>.*?\t)(?<referrer>.*?\t)(?<version>.*?\t)(?<user_agent>.*?\t)(?<request_body_len>.*?\t)(?<response_body_len>.*?\t)(?<status_code>.*?\t)(?<status_msg>.*?\t)(?<info_code>.*?\t)(?<info_msg>.*?\t)(?<tags>.*?\t)(?<username>.*?\t)(?<password>.*?\t)(?<proxied>.*?\t)(?<orig_fuids>.*?\t)(?<orig_filenames>.*?\t)(?<orig_mime_types>.*?\t)(?<resp_fuids>.*?\t)(?<resp_filenames>.*?\t)" 
|  stats count by ts,uid,orig_h,orig_p,resp_h,resp_p,trans_depth,method,host,uri,referrer,version,user_agent,request_body_len,response_body_len,status_code,status_msg,info_code,info_msg,tags,username,password,proxied,orig_fuids,orig_filenames,orig_mime_types,resp_fuids,resp_filenames

The output below represents a snapshot of all the fields extracted.

Once again, once all the fields have been extracted, we are in a position to gain statistics on each field. I have always been a big believer in tracking user agents; they provide insight into the tools and applications being seen in your environment. This search filter allows you to track those user agents and the IPs with which they are associated.

index=_* OR index=* sourcetype=Bro-Security-Monitoring source="/opt/bro/logs/current/http.log" NOT "#fields"
|  rex field=_raw "(?<ts>.*?\t)(?<uid>.*?\t)(?<orig_h>.*?\t)(?<orig_p>.*?\t)(?<resp_h>.*?\t)(?<resp_p>.*?\t)(?<trans_depth>.*?\t)(?<method>.*?\t)(?<host>.*?\t)(?<uri>.*?\t)(?<referrer>.*?\t)(?<version>.*?\t)(?<user_agent>.*?\t)(?<request_body_len>.*?\t)(?<response_body_len>.*?\t)(?<status_code>.*?\t)(?<status_msg>.*?\t)(?<info_code>.*?\t)(?<info_msg>.*?\t)(?<tags>.*?\t)(?<username>.*?\t)(?<password>.*?\t)(?<proxied>.*?\t)(?<orig_fuids>.*?\t)(?<orig_filenames>.*?\t)(?<orig_mime_types>.*?\t)(?<resp_fuids>.*?\t)(?<resp_filenames>.*?\t)" 
|  stats count by orig_h,user_agent 
|  dedup orig_h,user_agent 
| sort -count

The search filter above produces:

Let's wrap this up by looking at the "rare", or better yet unique, user agents. These unique user agents can help you detect the signs of a possible compromise sooner. If not a compromise, they can help you recognize when tools are being used against your environment. The following filter will help you identify the unique user agents.


index=_* OR index=* sourcetype=Bro-Security-Monitoring source="/opt/bro/logs/current/http.log" NOT("microsoft.com" OR "dell.com" OR "adobe.com" OR "splunk.com" OR "firefox.com" OR "portableapps.com" OR "stariptv" OR "blogspot" OR "comodo.com" OR "WINDOWS.COM")
|  rex field=_raw "(?<ts>.*?\t)(?<uid>.*?\t)(?<orig_h>.*?\t)(?<orig_p>.*?\t)(?<resp_h>.*?\t)(?<resp_p>.*?\t)(?<trans_depth>.*?\t)(?<method>.*?\t)(?<host>.*?\t)(?<uri>.*?\t)(?<referrer>.*?\t)(?<version>.*?\t)(?<user_agent>.*?\t)(?<request_body_len>.*?\t)(?<response_body_len>.*?\t)(?<status_code>.*?\t)(?<status_msg>.*?\t)(?<info_code>.*?\t)(?<info_msg>.*?\t)(?<tags>.*?\t)(?<username>.*?\t)(?<password>.*?\t)(?<proxied>.*?\t)(?<orig_fuids>.*?\t)(?<orig_filenames>.*?\t)(?<orig_mime_types>.*?\t)(?<resp_fuids>.*?\t)(?<resp_filenames>.*?\t)" 
| rare limit=50 user_agent

Note that this filter whitelists some values, such as anything to do with "microsoft.com", "dell.com", etc. The search then produces:

Well, that's it for this post. Once again, we first extracted all the fields, and as a result of that extraction we are now able to use any of the fields as we see fit.


See you in the next post where we look at DNS logs.


References:
https://security.googleblog.com/2018/02/a-secure-web-is-here-to-stay.html
https://blog.chromium.org/2017/04/next-steps-toward-more-connection.html

Posts in this series:
Visualizing your Zeek (Bro) data with Splunk - The Setup
Visualizing your Zeek (Bro) data with Splunk - conn.log (connection logs)
Visualizing your Zeek (Bro) data with Splunk - http.log (http logs)
Visualizing your Zeek (Bro) data with Splunk - dns.log (dns logs)
Visualizing your Zeek (Bro) data with Splunk - x509.log (x509 Certificate logs)

Visualizing your Zeek (Bro) data with Splunk - The Setup

In the two previous posts (1, 2) on Bro, we focused on installing Bro and configuring Bro.

Since then, I've learnt that Bro has now been renamed to Zeek. Feel free to read more about the name change here.

In this series of posts, we focus on visualizing some of the data that Bro has produced. As we continue building on this series in the future, we will look at writing some basic Bro signatures and scripts.

To help us visualize this data, we will be working with Splunk. Let's first configure Splunk to ingest the data. At this point, I'm assuming you already have Splunk installed. In my example, Splunk is running on the same machine as Bro. Let's configure Splunk's "inputs.conf".

securitynik@securitynik-host:#cd /opt/splunk/etc/apps/search
securitynik@securitynik-host:/opt/splunk/etc/apps/search#vi local/inputs.conf

[monitor:///opt/bro/logs/current]
disabled = false
host = securitynik-monitoring-bro
whitelist = \.log$
sourcetype = Bro-Security-Monitoring
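
Once "inputs.conf" has been saved, Splunk typically needs to be restarted (or at least have its configuration reloaded) before it begins monitoring the new path:

securitynik@securitynik-host:/opt/splunk/etc/apps/search# /opt/splunk/bin/splunk restart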

Now that we have Splunk configured to ingest the Bro data, let's move on to building our first widget for the dashboard. Just as I assume you have Splunk installed, I am assuming you have a dashboard. If you don't and need guidance on how to set one up, drop me a line and I can put together a quick post.
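
For anyone who wants a head start before that post exists, a bare-bones Splunk Simple XML dashboard looks something like the sketch below. The label and the panel query are placeholders of my own; swap the query for any of the searches in this series.

<dashboard>
  <label>SecurityNik - Zeek (Bro) Monitoring</label>
  <row>
    <panel>
      <title>Sample panel - replace this query with one from the series</title>
      <table>
        <search>
          <query>index=* sourcetype=Bro-Security-Monitoring | stats count by source</query>
          <earliest>-24h</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</dashboard>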

See you in our first widget where we focus on Zeek's (Bro) conn.log - connection logs

Posts in this series:
Visualizing your Zeek (Bro) data with Splunk - The Setup
Visualizing your Zeek (Bro) data with Splunk - conn.log (connection logs)
Visualizing your Zeek (Bro) data with Splunk - http.log (http logs)
Visualizing your Zeek (Bro) data with Splunk - dns.log (dns logs)
Visualizing your Zeek (Bro) data with Splunk - x509.log (x509 Certificate logs)

Friday, November 2, 2018

Spoofing/Replaying IBM QRadar packets/flows - tcpreplay (the more interesting way)


This post is a continuation of this previous post, in which we looked at obtaining packet/flow data without the need for additional tools. In this post, we have to do a bit more, but we will also be able to achieve a lot more. Let's now focus on method 2.

Method 2:
This second method, as you may recognize, is a bit more convoluted but still gets the job done. I also believe it puts you in a much better position to do more than method 1 does.

To get started, we need some sample packets. Feel free to download these from any website you wish; I have put some links in the references. However, for this post I will focus on packets which I have put online and which are used in my upcoming book.

Let's use "git" to "clone" this repository. First, I will make a directory named "downloadedPackets" to store the download. Once it is created, I "cd" into that directory.

[securitynik@qradarCE ~]# mkdir downloadedPackets
[securitynik@qradarCE ~]# cd downloadedPackets/

Do note, once this repository is cloned, there will be more in there than just packets. If you plan to get a copy of my book, this may be a great opportunity to get insights into what the packets are doing :-). You can grab the sample chapters here.

[securitynik@qradarCE downloadedPackets]# git clone https://github.com/SecurityNik/SUWtHEh-.git
Cloning into 'SUWtHEh-'...
remote: Enumerating objects: 90, done.
remote: Total 90 (delta 0), reused 0 (delta 0), pack-reused 90
Unpacking objects: 100% (90/90), done.

Now that the repository has been cloned, I "cd" into its directory. I then perform an "ls" and "wc" to learn how many .pcap files are in this folder.

[securitynik@qradarCE downloadedPackets]# cd SUWtHEh-/
[securitynik@qradarCE SUWtHEh-]# ls --all -l *.pcap | wc --lines
21

Above we see 21 pcap files.

Since "tcpreplay" is not installed on QRadar Community Edition, let's add it.

First let's install "libpcap-devel" via "yum"

[securitynik@qradarCE ~]# yum install libpcap-devel
....
--> Running transaction check
---> Package libpcap-devel.x86_64 14:1.5.3-11.el7 will be installed
--> Finished Dependency Resolution
....
Install  1 Package

Total download size: 118 k
Installed size: 163 k
Is this ok [y/d/N]: y
....
Installed:
  libpcap-devel.x86_64 14:1.5.3-11.el7

Complete!

Now that we have "libpcap-devel", let's next get "tcpreplay" from this link.

[securitynik@qradarCE ~]# wget https://github.com/appneta/tcpreplay/releases/download/v4.2.6/tcpreplay-4.2.6.tar.gz
--2018-11-02 20:11:32--  https://github.com/appneta/tcpreplay/releases/download/v4.2.6/tcpreplay-4.2.6.tar.gz
Resolving github.com (github.com)... 192.30.253.112, 192.30.253.113
Connecting to github.com (github.com)|192.30.253.112|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: 
.......
Resolving github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)... 52.216.100.43
Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)|52.216.100.43|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3494827 (3.3M) [application/octet-stream]
Saving to: ‘tcpreplay-4.2.6.tar.gz’

100%[===========================================================================================================>] 3,494,827   3.01MB/s   in 1.1s

2018-11-02 20:11:34 (3.01 MB/s) - ‘tcpreplay-4.2.6.tar.gz’ saved [3494827/3494827]

Once we have "tcpreplay-4.2.6.tar.gz", let's go ahead and untar it, then run "configure", "make" and "make install", so that we can use "tcpreplay".

[securitynik@qradarCE ~]# tar -zxvf tcpreplay-4.2.6.tar.gz
tcpreplay-4.2.6/
tcpreplay-4.2.6/Makefile.am
tcpreplay-4.2.6/docs/
tcpreplay-4.2.6/docs/Makefile.am
tcpreplay-4.2.6/docs/Win32Readme.txt
tcpreplay-4.2.6/docs/HACKING
tcpreplay-4.2.6/docs/Makefile.in
..............

Let's now switch to the "tcpreplay-4.2.6" directory.

[securitynik@qradarCE ~]# cd tcpreplay-4.2.6
[securitynik@qradarCE tcpreplay-4.2.6]# ./configure
checking whether to enable maintainer-
.............
##########################################################################
             TCPREPLAY Suite Configuration Results (4.2.6)
##########################################################################
libpcap:                    /usr (>= 0.9.6)
PF_RING libpcap             no
libdnet:                    no
autogen:                     (unknown - man pages will not be built)
Use libopts tearoff:        yes
64bit counter support:      yes
tcpdump binary path:        /usr/sbin/tcpdump
fragroute support:          no
tcpbridge support:          yes
tcpliveplay support:        yes

Supported Packet Injection Methods (*):
Linux TX_RING:              no
Linux PF_PACKET:            yes
BSD BPF:                    no
libdnet:                    no
pcap_inject:                yes
pcap_sendpacket:            yes **
pcap_netmap                 no
Linux/BSD netmap:           no
Tuntap device support:      yes

* In order of preference; see configure --help to override
** Required for tcpbridge

************************************************************

Next up, it's time to execute make and then make install. Let's run both together; only if "make" runs successfully will "make install" run.

[securitynik@qradarCE tcpreplay-4.2.6]# make && make install


Now that "tcpreplay" is installed, let's go ahead and replay some of our packet captures.
Let's go back into our folder where our packets are.


[securitynik@qradarCE ~]# cd downloadedPackets/SUWtHEh-/

As always, before running any of these commands, you should look at the help or man pages. Here is a snapshot of the help.

[securitynik@qradarCE SUWtHEh-]# tcpreplay --help
tcpreplay (tcpreplay) - Replay network traffic stored in pcap files
Usage:  tcpreplay [ -<flag> [<val>] | --<name>[{=| }<val>] ]... <pcap_file(s)>

   -q, --quiet                Quiet mode
   -T, --timer=str            Select packet timing mode: select, ioport, gtod, nano
       --maxsleep=num         Sleep for no more then X milliseconds between packets
   -v, --verbose              Print decoded packets via tcpdump to STDOUT
   -A, --decode=str           Arguments passed to tcpdump decoder
                                - requires the option 'verbose'
   -K, --preload-pcap         Preloads packets into RAM before sending
   -c, --cachefile=str        Split traffic via a tcpprep cache file
                                - requires the option 'intf2'
                                -- and prohibits the option 'dualfile'
   -2, --dualfile             Replay two files at a time from a network tap
                                - requires the option 'intf2'
                                -- and prohibits the option 'cachefile'
   -i, --intf1=str            Client to server/RX/primary traffic output interface
   -I, --intf2=str            Server to client/TX/secondary traffic output interface
       --listnics             List available network interfaces and exit
   -l, --loop=num             Loop through the capture file X times
                                - it must be in the range:
....

Let's use the "--listnics" argument of "tcpreplay" to see what interfaces it has identified.

[securitynik@qradarCE SUWtHEh-]# tcpreplay --listnics
Available network interfaces:
docker0
appProxy
dockerInfra
dockerApps
vethbe3a5ae
veth679a5ac
veth17c3fda
ens33
veth39ffa79
vetha7d6436
veth0d9817d
any
nflog
nfqueue
usbmon1
usbmon2

As we can see above, the "ens33" interface is available. Let's replay on this interface since it is already configured in method 1 (see the previous post) for receiving flows.

Let's look at the pcaps which are available.


[securitynik@qradarCE SUWtHEh-]# ls --all -l *.pcap
-rw-r--r-- 1 securitynik securitynik  1018617 Nov  2 20:05 enum4linux_v.pcap
-rw-r--r-- 1 securitynik securitynik     1771 Nov  2 20:05 hydra_port_21.pcap
-rw-r--r-- 1 securitynik securitynik     9928 Nov  2 20:05 hydra_port_22.pcap
-rw-r--r-- 1 securitynik securitynik     7004 Nov  2 20:05 hydra_port_23.pcap
-rw-r--r-- 1 securitynik securitynik  1471289 Nov  2 20:05 hydra_port_445.pcap
-rw-r--r-- 1 securitynik securitynik   280812 Nov  2 20:05 metasploitable_9999_SUWtHEh.pcap
-rw-r--r-- 1 securitynik securitynik    62192 Nov  2 20:05 metasploitable_Telnet_SUWTHEh.pcap
-rw-r--r-- 1 securitynik securitynik   987362 Nov  2 20:05 MS17_010 - exploit.pcap
-rw-r--r-- 1 securitynik securitynik    57005 Nov  2 20:05 nbtscan.pcap
-rw-r--r-- 1 securitynik securitynik    13708 Nov  2 20:05 nbtscan-v.pcap
-rw-r--r-- 1 securitynik securitynik  4466911 Nov  2 20:05 nmap_host_scan_tcp.pcap
-rw-r--r-- 1 securitynik securitynik   106552 Nov  2 20:05 nmap_ping_scan.pcap
-rw-r--r-- 1 securitynik securitynik     8852 Nov  2 20:05 nmap_script_smb_ms17-010.pcap
-rw-r--r-- 1 securitynik securitynik   862576 Nov  2 20:05 nmap_script_vuln_ms17-010.pcap
-rw-r--r-- 1 securitynik securitynik   192987 Nov  2 20:05 nmap_sn.pcap
-rw-r--r-- 1 securitynik securitynik   462865 Nov  2 20:05 wget_index.pcap
-rw-r--r-- 1 securitynik securitynik      116 Nov  2 20:05 Win10_1-2.pcap
-rw-r--r-- 1 securitynik securitynik 24950772 Nov  2 20:05 WinXP-172.pcap
-rw-r--r-- 1 securitynik securitynik   540552 Nov  2 20:05 WinXP-4444-1820.pcap
-rw-r--r-- 1 securitynik securitynik   119400 Nov  2 20:05 WinXP-445.pcap
-rw-r--r-- 1 securitynik securitynik 25049045 Nov  2 20:05 WinXP.pcap

Let's try the file "enum4linux_v.pcap".

[securitynik@qradarCE SUWtHEh-]# tcpreplay --intf1=ens33 enum4linux_v.pcap
.... [I had some errors here]
Actual: 5348 packets (933025 bytes) sent in 10.39 seconds
Rated: 89735.3 Bps, 0.717 Mbps, 514.35 pps
Statistics for network device: ens33
        Successful packets:        5341
        Failed packets:            7
        Truncated packets:         0
        Retried packets (ENOBUFS): 0
        Retried packets (EAGAIN):  0
************************************************************


Above we see there were 5341 packets successfully replayed.

Let's try another file. This time, the large WinXP.pcap shown above, with "25049045" bytes.


[securitynik@qradarCE SUWtHEh-]#tcpreplay --intf1=ens33 --mbps=10 WinXP.pcap
Actual: 24957 packets (24649709 bytes) sent in 19.72 seconds
Rated: 1249660.7 Bps, 9.99 Mbps, 1265.23 pps
Flows: 159 flows, 8.06 fps, 22135 flow packets, 2822 non-flow
Statistics for network device: ens33
        Successful packets:        24957
        Failed packets:            0
        Truncated packets:         0
        Retried packets (ENOBUFS): 0
        Retried packets (EAGAIN):  0


We can also specify multiple files if we wish. Let's do that with this last replay.


[securitynik@qradarCE SUWtHEh-]#tcpreplay --intf1=ens33 --mbps=10 --loop=10 WinXP-172.pcap nmap_host_scan_tcp.pcap metasploitable_Telnet_SUWTHEh.pcap hydra_port_445.pcap  MS17_010\ -\ exploit.pcap 2>/dev/null

Actual: 557910 packets (306752990 bytes) sent in 245.40 seconds
Rated: 1249999.8 Bps, 9.99 Mbps, 2273.44 pps
Flows: 197 flows, 0.80 fps, 10833000 flow packets, 17062500 non-flow
Statistics for network device: ens33
        Successful packets:        552980
        Failed packets:            4930
        Truncated packets:         0
        Retried packets (ENOBUFS): 0
        Retried packets (EAGAIN):  0


From the above, we see 552980 packets were successfully replayed. Unfortunately, 4930 of them failed.

By looking at the help ("tcpreplay --help") or the man pages ("man tcpreplay"), you should be able to understand what all of the arguments to tcpreplay do. As for "2>/dev/null", all I'm doing there is sending any error messages generated during the execution of this command to a black hole. Basically: don't print error messages on the screen, just discard them.
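
If that "2>" redirection is new to you, here is a quick illustration you can try yourself. The pcap name below is made up, so tcpreplay has something to complain about:

# without the redirection, the error prints to the terminal (stderr)
[securitynik@qradarCE SUWtHEh-]# tcpreplay --intf1=ens33 no_such_file.pcap

# with 2>/dev/null, the same error is discarded and nothing is printed
[securitynik@qradarCE SUWtHEh-]# tcpreplay --intf1=ens33 no_such_file.pcap 2>/dev/null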

As we now look at the QRadar "Network Activity" tab, we can see some of the packets coming in.

Well, I hope you enjoyed these two sessions. Remember, if you would really like to understand the packets and logs we downloaded, feel free to download the sample chapters of the book here. Alternatively, I hope you grab a copy when it becomes available. :-)

Sample packets:
SecurityNik - Hack & Detect book sample packets and logs
Wireshark Sample Packets
NetResec
BE CAREFUL - Malware Sample from Malware-Traffic-Analysis.net

tcpreplay:
https://tcpreplay.appneta.com/wiki/installation.html
http://tcpreplay.synfin.net/wiki/tcpreplay