What did I need?
1. The time
2. The Juniper SA host
3. The user login source
4. The logon username
5. The from address
This information is needed for both failed and successful logins.
What did I have?
Jan 1 13:48:24 10.0.0.100 Juniper: 2014-04-01 13:49:47 - ive - [10.0.0.200] securitynik(User)[] - Login failed using auth server SECURITYNIK AD (Samba). Reason: Failed
Jan 1 13:48:24 10.0.0.100 Juniper: 2014-04-01 13:49:47 - ive - [10.0.0.200] securitynik(User)[] - Secondary authentication failed for securitynik/SecurityNik AD from 10.0.0.200
Jan 1 13:47:03 10.0.0.100 Juniper: 2014-04-01 13:48:26 - ive - [10.0.0.200]SECURITYNIK\securitynik(Others)[] - Login failed using auth server SECURITYNIK AD (Samba). Reason: Failed
Jan 1 13:47:03 10.0.0.100 Juniper: 2014-04-01 13:48:26 - ive - [10.0.0.200]SECURITYNIK\securitynik(Others)[] - Primary authentication failed for SECURITYNIK\securitynik/SecurityNik AD from 10.0.0.200
Jan 1 09:05:38 10.0.0.100 Juniper: 2014-04-01 09:07:01 - ive - [10.0.0.200] securitynik(User)[Admins, Support] - Login succeeded for securitynik/User (session:470852ff) from 10.0.0.200.
Now what am I supposed to do with all this, especially when the only fields Splunk gave me that were of interest are the time and the Juniper SA host?
Apparently, quite a lot! :-D
Once some time was dedicated to understanding regex and how the rex command is used, everything fell into place.
Let's see what the Splunk filter looks like for a successful login:
juniper "Login succeeded for" | rex field=_raw "\- ive \- \[(?<src_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\] (?<logon_user>.*\]).* from (?<from_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})" | table _time, host, src_ip, from_ip, logon_user | dedup src_ip
Let’s break this down:
Every successful logon has the keywords “Login succeeded for”
So, first, we are searching for the keywords “juniper” AND “Login succeeded for”
Second, the rex command uses field=_raw, which means we will be parsing the raw event data.
Third, we name the field src_ip for the data that comes after "- ive - [", using the regex \d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}
Fourth, once the src_ip address is extracted, we create a field called logon_user, capturing everything after the IP address up to the closing ].
Fifth, to extract the from IP, we create a field called from_ip by matching an IP address once again with \d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}.
Once all this is finished, we build a table showing the time, host (the Juniper SA), src_ip, from_ip, and logon_user. We then remove the duplicates based on src_ip.
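If you want to sanity-check the regex outside of Splunk, here is a quick Python sketch run against the successful-login sample above. Python's `re` module uses `(?P<name>...)` for named groups where Splunk's rex uses `(?<name>...)`; otherwise the pattern is the same one from the search:

```python
import re

# Sample successful-login event copied from the logs above
line = ("Jan 1 09:05:38 10.0.0.100 Juniper: 2014-04-01 09:07:01 - ive - "
        "[10.0.0.200] securitynik(User)[Admins, Support] - Login succeeded "
        "for securitynik/User (session:470852ff) from 10.0.0.200.")

# Same pattern as the rex command, with Python-style named groups
pattern = (r"\- ive \- \[(?P<src_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\] "
           r"(?P<logon_user>.*\]).* from "
           r"(?P<from_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})")

m = re.search(pattern, line)
if m:
    print(m.group("src_ip"))      # 10.0.0.200
    print(m.group("logon_user"))  # securitynik(User)[Admins, Support]
    print(m.group("from_ip"))     # 10.0.0.200
```

Note that `logon_user` is greedy (`.*\]`), so it captures everything up to the last `]` before the " from " portion, including the group list.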
The finished product?
Juniper - Successful Logons
_time host src_ip from_ip logon_user
2014-05-01 14:17:56 10.0.0.100 10.0.0.200 10.0.0.200 SECURITYNIK\securitynik(Others)[]
Now that we can extract our successful logins, let's build a search filter for the failed logons.
Building this out is quite similar to the successful filter; however, we will make a few small changes to the search strings.
juniper "Login failed using auth server securitynik AD" OR "Password realm restrictions failed" OR "Primary authentication failed" OR "Secondary authentication failed" | rex field=_raw "\- ive \- \[(?<src_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\] (?<username>.*\]).* from (?<from_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})" | stats count by _time, host,src_ip,from_ip,username | dedup src_ip | sort count | reverse
There are a couple of different failed-login messages. As a result, I try to incorporate all of them in the search filter to identify the failed logins.
The fields extracted are much the same; however, this time we use stats count to count the number of failed logins.
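The same idea can be sketched in Python for the failed-login samples. The snippet below counts matches per source IP and username, roughly what `stats count by src_ip, username` does in the search. One caveat from the sample logs: some failed events have no space after the closing `]` (e.g. `[10.0.0.200]SECURITYNIK\securitynik`), so this sketch uses an optional `\s?` where the Splunk rex above has a literal space:

```python
import re
from collections import Counter

# Two failed-login samples copied from the logs above
failed_lines = [
    "Jan 1 13:48:24 10.0.0.100 Juniper: 2014-04-01 13:49:47 - ive - "
    "[10.0.0.200] securitynik(User)[] - Secondary authentication failed "
    "for securitynik/SecurityNik AD from 10.0.0.200",
    "Jan 1 13:47:03 10.0.0.100 Juniper: 2014-04-01 13:48:26 - ive - "
    "[10.0.0.200]SECURITYNIK\\securitynik(Others)[] - Primary authentication "
    "failed for SECURITYNIK\\securitynik/SecurityNik AD from 10.0.0.200",
]

# \s? instead of a literal space: the space after ] is sometimes absent
pattern = re.compile(
    r"\- ive \- \[(?P<src_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\]\s?"
    r"(?P<username>.*\]).* from (?P<from_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})")

# Count failed logins per (src_ip, username), like "stats count by"
counts = Counter()
for line in failed_lines:
    m = pattern.search(line)
    if m:
        counts[(m.group("src_ip"), m.group("username"))] += 1

for (src_ip, username), count in counts.most_common():
    print(src_ip, username, count)
```

If the Splunk version with the literal space misses some of these events, adding `\s?` (or `\s*`) to the rex pattern in the same spot should catch both variants.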
I must admit, once I figured out how much easier the rex command can make your splunking :-) life, I was ready to extract even more data.
Additional Readings:
http://docs.splunk.com/Documentation/Splunk/6.0.3/SearchReference/Rex