Thursday, November 14, 2019

Come hang out at one of my upcoming classes to expand your knowledge of Intrusion Detection, Incident Handling, and Hacker Techniques & Exploits.

Upcoming Courses Taught By Nik Alleyne
Type | Course / Location | Date

Training Event
SANS Zurich February 2020 Zurich, Switzerland
Feb 24, 2020 -
Feb 29, 2020

Training Event
SANS Amsterdam May 2020 Amsterdam, Netherlands
May 11, 2020 -
May 18, 2020

Training Event
SANS Paris June 2020 Paris, France
Jun 8, 2020 -
Jun 13, 2020

Summit
SANS Threat Hunting & IR Europe Summit & Training 2020 London, United Kingdom
Jan 13, 2020 -
Jan 19, 2020
*Course contents may vary depending upon location, see specific event description for details.

Build on your Red and Blue Team skills from a practical perspective while learning about the Cyber Kill Chain

It's finally here! If you are looking for the right book to help you expand your network forensics knowledge, this is the book you need.

In Hack and Detect we leverage the Cyber Kill Chain for practical hacking and, more importantly, its detection via network forensics. In this book you will use Kali and many of its tools, including Metasploit, to hack, and then do lots of detecting via logs and packet analysis. We also implement mitigation strategies to limit and/or prevent future compromises.

Grab your copy from Amazon to learn more.
https://www.amazon.com/dp/1731254458





Alternatively, grab the updated and production-ready sample chapters here to get a sneak peek of what you can expect.

NOTE: All sample logs, pcaps, vbscripts, etc. can be found on the book's GitHub page. This means if you don't wish to build your own lab, you have all you need to follow along.

The GitHub page is located here: https://github.com/SecurityNik/SUWtHEh-


Do enjoy the read! Please do leave a comment on what you liked, what you didn't like and, most importantly, what I can do differently next time if I decide to go down this road again. :-)

Beginning Machine Learning - Logistic Regression Algorithm - Titanic Dataset

While there have been many great tutorials online that I've used, this one is mostly based on the "Machine Learning Full Course - Learn Machine Learning 10 Hours | Machine Learning Tutorial | Edureka" video. Some of the other sites I've used are also within the references.


Logistic Regression is used in situations where the outcome is binary: true or false, on or off, yes or no, 0 or 1.

Whereas in linear regression the value to predict is continuous, in logistic regression the value to predict is categorical, e.g. on or off, yes or no, 0 or 1. Logistic regression solves a classification problem.
While linear regression fits a straight line, logistic regression uses an S-curve. The S-curve, also known as the Sigmoid function, maps any input to a value between 0 and 1, which can be used to predict the Y value.
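As a quick, minimal sketch of the idea (this snippet is purely illustrative and separate from the Titanic example below), the Sigmoid function squashes any real-valued input into the range 0 to 1, which can then be read as a probability:

#!/usr/bin/env python3
# Minimal Sigmoid illustration: squash raw values into the (0, 1) range
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Values near 0.5 sit at the decision boundary; probabilities >= 0.5 map to class 1
print(sigmoid(np.array([-6.0, -2.0, 0.0, 2.0, 6.0])))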


A good example of where logistic regression is used is predicting the weather in binary terms, e.g. will it rain: yes or no. Predicting a continuous value such as temperature would instead be a job for linear regression.

The steps to be considered are:
Collect data -> Analyze the data -> Perform data wrangling -> Set up the train and test sets -> Do an accuracy check to see how the algorithm performs

#!/usr/bin/env python3

from matplotlib import pyplot as plt
import numpy as np
import seaborn as sns
import math
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score

'''
This code is part of me continuing my machine learning journey and is focused on
logistic regression. 
Author: Nik Alleyne
Author Blog: www.securitynik.com
filename: titanicLogisticRegression.py

This uses the titanic dataset; an example can be found at 
http://biostat.mc.vanderbilt.edu/wiki/pub/Main/DataSets/titanic3.xls

pclass - Passenger Class (1 = 1st, 2 = 2nd, 3 = 3rd)
survived - (0 = No, 1 = Yes)
name - Name
sex - Sex
age - Age 
sibsp - Number of Siblings/Spouse Aboard 
parch - Number of parents/Children Aboard
ticket - Ticket Number
fare - Passenger Fare (British Pounds)
cabin - Cabin
embarked - Port of Embarkation (C = Cherbourg, Q = Queenstown, S = Southampton) 
boat - Lifeboat
body - Body Identification Number
home.dest - Home/Destination
'''


def main():
    # Read the Excel file
    df = pd.read_excel('./titanic.xls')

    #Let's drop a few columns which may not be relevant
    df.drop(['boat'], axis=1, inplace=True)    
    df.drop(['body'], axis=1, inplace=True)
    df.drop(['home.dest'], axis=1, inplace=True)

    # Print information on the dataset. Note df.info() prints directly and returns None
    print('[*] Information on the dataset')
    df.info()
    
    #Let's see if we can read a few of the records
    print('[*] First 10 records are: \n{}'.format(df.head(10)))
    print('[*] The total number of entries is:{}'.format(len(df.index)))

    # Let's visualize the data from different perspectives
    sns.set(style='darkgrid')
    sns.countplot(x='sex', data=df)
    plt.show()

    ''' 
    Interestingly, the graph above shows that the ratio of men to 
    women on the titanic was almost 2 to 1
    '''

    #Let's now look at the survivors vs non survivors
    sns.countplot(x = 'survived', data=df)
    plt.show()
    ''' 
    From the graph returned here, it shows that roughly 62% 
    of the passengers did not survive
    '''
    
    # From the survivors, how many were men vs women
    sns.countplot(x='survived', hue='sex', data=df)
    plt.show()
    '''
    The graph for this finding was important to me.
    Even though there were more males on the titanic, a significantly 
    larger number of females survived compared to males
    '''


    
    #Let's see how the passengers were distributed by class
    sns.countplot(x = 'pclass', data=df)
    plt.show()
    ''' 
    From the graph produced here, it shows the majority of the passengers were
    in class 3, and surprisingly (to me) there were more passengers in 
    first class than second class
    '''

    # Now let's see if class had anything to do with their survival
    sns.countplot(x = 'survived', hue='pclass', data=df)
    plt.show()
    '''
    Believe it or not, it would seem like class made a difference. 
    Those in 3rd class were more likely to have not survived. 
    1st class had the most folks who survived. Then again, it could be 
    possible that since 3rd class had the most passengers, this is why 
    most of them did not survive. That could be true. However, do remember
    from above, there were more men than women on the titanic, yet there 
    were significantly more women who survived than men
    '''

    # Finally, what was the age distribution of the passengers
    sns.countplot(x='age', data=df)
    plt.show()
    '''
    Looks like age 24 had the largest number of passengers on the titanic. Interesting
    '''

    # Since some records are blank or have missing values, we need to clean up those records
    print('[*] Checking for null entries ... \n {}'.format(df.isnull()))
    '''
    The results show that we have entries which are null
    '''

    #Let's now see exactly which columns have null values and their count
    print('[*] Count of columns with null values \n{}'.format(df.isnull().sum()))
    '''
    From the results returned, age, cabin and embarked consists of null values
    '''

    # For the columns with null values, we either drop the column or drop the affected rows
    # Let's drop the cabin column, since it has so many null values
    df.drop(['cabin'], axis=1, inplace=True)

    #clean up nan entries
    df.dropna(inplace=True)

    #Let's see if we have any null entries again. They should be all gone
    print('[*] Count of columns with null values \n{}'.format(df.isnull().sum()))
    '''
    Very nice! The results from this show that all the null values have 
    been cleaned up. Nice clean dataset
    '''

    '''Still have to wrangle some of this data. We have to convert the string values to numerical ones.
    For example, string values exist for name, sex, ticket and embarked. We have to convert
    these to categorical variables in order to implement logistic regression.
    Basically, we need to ensure no strings are present as we implement machine learning.
    We will thus use pandas to help us out here with creating dummy variables
    '''

    #Currently 3 classes; let's simplify this via binary dummy variables
    pClass = pd.get_dummies(df['pclass'], drop_first=True)
    print('[*] Here is what the class currently looks like \n{}'.format(pClass))
    
    # Let's get the sex to a binary value: True or false for male or female
    male_female = pd.get_dummies(df['sex'], drop_first=True)
    print('[*] Here is what the sex column currently looks like \n{}'.format(male_female))
    
    # Since embarked consists of one of 3 categories, we can do what we did to the class here
    embark = pd.get_dummies(df['embarked'], drop_first=True)
    print('[*] Here is what the embark column currently looks like \n{}'.format(embark))

    #Let's now add these new columns into our existing dataset
    df = pd.concat([df,pClass,male_female,embark], axis=1)
    print('[*] Our data now looks like \n {}'.format(df.head()))
    
    
    #Now that we have the new columns added, we need to remove the previous values and any other irrelevant columns
    df.drop(['pclass', 'sex', 'name', 'embarked', 'ticket'], axis=1, inplace=True)
    print('[*] Our finalized dataset:\n{}'.format(df.head()))


    '''
    Let's now look at splitting our data into our training set and test set.
    Specifically, we would like to predict whether someone survived the titanic
    '''
    # First our features/independent variables. Use everything other than the y column
    X = df.drop('survived', axis=1)
    
    #Our y axis / dependent variable. The value we would like to predict
    y = df['survived']

    #Train and split our dataset. 70% for training and 30% for testing
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=1)

    
    lr = LogisticRegression(verbose=True, solver='lbfgs')
    lr.fit(X_train, y_train)

    # Time for a prediction on the test data
    my_prediction = lr.predict(X_test)
    print('[*] Prediction on survival based on test data:\n{}'.format(my_prediction))

    #Time to test the accuracy of our model
    print('[*] Our Classification report \n {}'.format(classification_report(y_test, my_prediction)))
    

    '''Looking at the accuracy from the perspective of confusion matrix
     To learn more about confusion matrix see:
     https://en.wikipedia.org/wiki/Confusion_matrix
     https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/
    '''
    print('[*] Results from Confusion Matrix:\n{}'.format(confusion_matrix(y_test,my_prediction)))

    # Now let's calculate accuracy the easy way
    print('[*] Accuracy of the model is:{}'.format(accuracy_score(y_test,my_prediction)))


if __name__ == '__main__':
    main()
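As a possible extension (not part of the original script), once the model is fit you could predict survival for a single made-up passenger. The column order below is an assumption based on the wrangling above (age, sibsp, parch, fare, the pclass dummies 2 and 3, male, and the embarked dummies Q and S):

# Hypothetical passenger: 25-year-old male, 3rd class, fare of 7.25, embarked at Southampton
# Column order assumed from the wrangled dataframe: age, sibsp, parch, fare, 2, 3, male, Q, S
sample_passenger = [[25, 0, 0, 7.25, 0, 1, 1, 0, 1]]
print('[*] Predicted survival (0 = No, 1 = Yes): {}'.format(lr.predict(sample_passenger)))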

References:
https://www.youtube.com/watch?v=GwIo3gDZCVQ&list=PL9ooVrP1hQOHUfd-g8GUpKI3hHOwM_9Dn&index=1
https://stackoverflow.com/questions/46623583/seaborn-countplot-order-categories-by-count

Beginning Machine Learning - Linear Regression

This post is the second part of my journey to learn machine learning. Hopefully I'm improving along the way :-). Feel free to add your comments on what I should do differently.

#!/usr/bin/env python3

'''
    This code is based on me learning more about Linear Regression 
    This is part of me expanding my knowledge on machine learning

    This version of the code uses scikit-learn

    Author: Nik Alleyne
    blog: www.securitynik.com
    filename: linearRegresAlgo_v3.py

'''


import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split



def main():
    print('[*] Beginning Linear regression ...')
    
    # Reading Data - This file was downloaded from GitHub. 
    # See the reference section for the URL
    df = pd.read_csv('./headbrain.csv',sep=',', dtype='int64', verbose=True)

    
    print('[*] First 10 records \n {}' .format(df.head(10)))
    print('[*] Quick description of the dataframe: \n{}'.format(df.describe()))
    print('[*] {} rows, columns '.format(df.shape))

    #Let's now create the X and Y arrays
    X = np.array(df['Head Size(cm^3)'].values).reshape(-1, 1)
    Y = np.array(df['Brain Weight(grams)'].values)
    
    #Split the dataset into training and testing sets
    X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.30, random_state=10)

    lr = LinearRegression()
    lr.fit(X_train, y_train)
    print('[*] When X is 4234 the predicted value of y is{}'.format(lr.predict([[4234]])))
    
    r_sqr_score = lr.score(X, Y)
    print('[*] The R2 score is {}'.format(r_sqr_score))



if __name__ == '__main__':
    main()


The output from the above code is as follows:


root@securitynik:~/ML# ./linearRegresAlgo_v3.py 
[*] Beginning Linear regression ...
Tokenization took: 0.06 ms
Type conversion took: 0.21 ms
Parser memory cleanup took: 0.00 ms
[*] First 10 records 
    Gender  Age Range  Head Size(cm^3)  Brain Weight(grams)
0       1          1             4512                 1530
1       1          1             3738                 1297
2       1          1             4261                 1335
3       1          1             3777                 1282
4       1          1             4177                 1590
5       1          1             3585                 1300
6       1          1             3785                 1400
7       1          1             3559                 1255
8       1          1             3613                 1355
9       1          1             3982                 1375
[*] Quick description of the dataframe: 
           Gender   Age Range  Head Size(cm^3)  Brain Weight(grams)
count  237.000000  237.000000       237.000000           237.000000
mean     1.434599    1.535865      3633.991561          1282.873418
std      0.496753    0.499768       365.261422           120.340446
min      1.000000    1.000000      2720.000000           955.000000
25%      1.000000    1.000000      3389.000000          1207.000000
50%      1.000000    2.000000      3614.000000          1280.000000
75%      2.000000    2.000000      3876.000000          1350.000000
max      2.000000    2.000000      4747.000000          1635.000000
[*] (237, 4) rows, columns 
[*] When X is 4234 the predicted value of y is[1441.04828161]
[*] The R2 score is 0.6388174521966088



Beginning Machine Learning - Rebuilding the Linear Regression Algorithm

Over the last few months, I've been caught up with expanding my knowledge on machine learning. As a result, these next few posts are all about me documenting my learning. As stated in many of my previous posts, this is all about making it easier for me to be able to refresh my memory in the future.

While there have been many great tutorials online that I've used, this one is mostly from the "Machine Learning Full Course - Learn Machine Learning 10 Hours | Machine Learning Tutorial | Edureka" on YouTube. Some of the other sites I've used are also within the references.

In this post I'm rebuilding the Linear Regression algorithm, and in the next post we use scikit-learn's LinearRegression.


#!/usr/bin/env python3

'''
    This code is based on me learning more about Linear Regression 
    This is part of me expanding my knowledge on machine learning
    In this version I'm rebuilding the algorithm 

    Author: Nik Alleyne
    blog: www.securitynik.com
    filename: linearRegresAlgo_v2.py

'''


import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
plt.rcParams['figure.figsize'] = (20.0, 10.0)



def main():
    print('[*] Beginning Linear regression ...')
    
    # Reading Data - This file was downloaded from GitHub. 
    # See the reference section for the URL
    df = pd.read_csv('./headbrain.csv',sep=',', dtype='int64', verbose=True)

    #Gather information on the shape of the dataset
    print('[*] {} rows, columns in the training dataset'.format(df.shape))
    print('[*] First 10 records of the training dataset')
    print(df.head(10))

    #Let's now create the X and Y arrays
    X = df['Head Size(cm^3)'].values
    Y = df['Brain Weight(grams)'].values

    #Find the mean of X and Y
    mean_x = np.mean(X)
    mean_y = np.mean(Y)
    print('[*] The mean of X is {} || The mean of Y is {} '.format(mean_x, mean_y))
    
    # Calculating the coefficients
    # See formula here https://support.minitab.com/en-us/minitab-express/1/help-and-how-to/modeling-statistics/regression/how-to/multiple-regression/methods-and-formulas/methods-and-formulas/#coefficient-coef
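    # For reference (added note), the loop below implements the least squares closed form:
    #   b1 = sum((x_i - mean_x) * (y_i - mean_y)) / sum((x_i - mean_x)^2)
    #   b0 = mean_y - b1 * mean_x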
    numerator = 0
    denominator = 0

    for i in range(len(X)):
        numerator += ((X[i] - mean_x) * (Y[i] - mean_y))
        denominator += (X[i] - mean_x) ** 2
    b1 = numerator / denominator
    b0 = mean_y - (b1 * mean_x)
    print('[*] Coefficients:-> slope (b1): {} || intercept (b0): {}'.format(b1, b0))

    # When compared to the equation y = mx+c, we can say m = b1 & c = b0

    # create the graph
    max_x = np.max(X) + 100
    min_x = np.min(X) - 100

    # Calculating line values x and y
    x = np.linspace(min_x, max_x, 1000)
    y = b0 + b1 * x

    #plotting the line
    plt.plot(x,y, color='r', label='Regression Line')
    plt.scatter(X, Y, c='b', label='Scatter Plot')

    plt.xlabel('Head Size(cm^3)')
    plt.ylabel('Brain Weight(grams)')
    plt.legend()
    plt.show()

    # Let's now use the R2 method to determine how good the model is
    # Formula can be found here
    # https://support.minitab.com/en-us/minitab-express/1/help-and-how-to/modeling-statistics/regression/how-to/multiple-regression/methods-and-formulas/methods-and-formulas/#coefficient-coef
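    # For reference (added note), the loop below implements:
    #   R^2 = 1 - (ss_error / ss_total)
    #   where ss_total = sum((y_i - mean_y)^2) and ss_error = sum((y_i - y_pred)^2)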
    ss_total = 0
    ss_error = 0

    for i in range(len(X)):
        y_pred = b0 + b1 * X[i]
        ss_total += (Y[i] - mean_y) ** 2
        ss_error += (Y[i] - y_pred) ** 2
    r_sq = 1 - (ss_error/ss_total)
    print('[*] The R^2 value is: {}'.format(r_sq))




if __name__ == '__main__':
    main()


When we run the code, we get:



root@securitynik:~/ML# ./linearRegresAlgo_v2.py | more
[*] Beginning Linear regression ...
Tokenization took: 0.06 ms
Type conversion took: 0.23 ms
Parser memory cleanup took: 0.00 ms
[*] (237, 4) rows, columns in the training dataset
[*] First 10 records of the training dataset
   Gender  Age Range  Head Size(cm^3)  Brain Weight(grams)
0       1          1             4512                 1530
1       1          1             3738                 1297
2       1          1             4261                 1335
3       1          1             3777                 1282
4       1          1             4177                 1590
5       1          1             3585                 1300
6       1          1             3785                 1400
7       1          1             3559                 1255
8       1          1             3613                 1355
9       1          1             3982                 1375
[*] The mean of X is 3633.9915611814345 || The mean of Y is 1282.873417721519
[*] Coefficients:-> slope (b1): 0.26342933948939945 || intercept (b0): 325.57342104944223
[*] The R^2 value is: 0.6393117199570003



That's it, my first shot at machine learning. In the next post we use scikit-learn rather than building the algorithm ourselves.


References:
https://www.youtube.com/watch?v=GwIo3gDZCVQ&list=PL9ooVrP1hQOHUfd-g8GUpKI3hHOwM_9Dn&index=1
https://matplotlib.org/3.1.1/tutorials/introductory/customizing.html#sphx-glr-tutorials-introductory-customizing-py
Headbrain.csv 
read_csv
Calculating Coefficient
R2

Tuesday, October 22, 2019

Monitoring IBM QRadar Persistent Folder


Recently, on at least two occasions, I encountered a problem whereby the IBM QRadar Persistent Storage Folder '/store/persistent_queue/ecs-ec-ingress.ecs-ec-ingress/' fills up and causes a backlog of events. This means events shown on the "Log Activity" tab have a date in the past, even though the log sources are sending their logs in real time.

To monitor this folder, the following script was developed; it helps detect this issue sooner rather than later. The script is configured to execute as a cron job.
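For example, a possible crontab entry might look like the following (the script path and log file here are made-up examples; adjust them to your environment):

# Run the persistent queue check every 30 minutes (example paths)
*/30 * * * * /usr/bin/python /opt/scripts/QRadarMonitorPersistentQueue.py >> /var/log/qradar_pq_monitor.log 2>&1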

#!/usr/bin/env python

# This script monitors the QRadar Persistent Storage Folder
# QRadarMonitorPersistentQueue.py
# This tool was designed to monitor the QRadar Persistent Storage Folder because of all the heartaches it is causing
# Currently there is no simple way to monitor this folder for backlog

__author__ = 'Nik Alleyne'
__author_blog__ = 'www.securitynik.com'
__copyright__ = 'SecurityNik'
__credits__ = 'Nik Alleyne'
__version__ = '1.0.1'
__email__ = 'nikalleyne at gmail dot com'
__status__ = 'Production Ready'


import datetime
import os
import sys
import subprocess as sp
from socket import gethostname as hostname
import smtplib
import email.utils
from email.mime.text import MIMEText as mt


# Verify QRadar and OS version
def verify_os_qradar_version():
 # Clear the screen
 sp.call(['clear'])
 if ( sys.platform == 'linux2' ):
  print('[*] Running on Linux ...')
  
  print('[*] Checking QRadar Version ...')
  qradar_version = sp.check_output(['/opt/qradar/bin/getHostVersion.sh']).split('\n')[2]

  if (qradar_version.startswith('qradar')):
   print('[*] Found QRadar Version:{} '.format(qradar_version.split('=')[1]))
  else:
   print('[!] Unable to determine QRadar Version!')

 else:
  print('[*] Not running on Linux. Exiting ...')
  sys.exit(0)


# Check the directory size 
def check_directory(num_files, dir_size):
 # Declare variables
 persistent_directory_path = '/store/persistent_queue/ecs-ec-ingress.ecs-ec-ingress/'
 max_files = 10 #10 files
 max_dir_size = 10000000000 #10GB
 current_dir_size = 0


 # Check if the directory path exists
 print('[*] Checking for directory {} ...'.format(persistent_directory_path))
 if ( os.path.exists(persistent_directory_path) and os.path.isdir(persistent_directory_path) ):
  print('\t[*] Directory found ...')

  # Get the directory size
  # Cast to int so the size comparison below works correctly
  current_dir_size = int(sp.check_output(['du', '--bytes', persistent_directory_path]).split('\t')[0])
  
  # All is well only if there are fewer than max_files files and the directory is smaller than max_dir_size
  if (( len(os.listdir(persistent_directory_path)) < max_files ) and ( current_dir_size < max_dir_size )) :
   print('\t[*] Current number of file is: {} '.format(len(os.listdir(persistent_directory_path))))
   print('\t[*] Directory size is: {} Bytes '.format(current_dir_size))
   print('[***] System looks ok! [***]:-)')
  else:
   print('\t[!] Current number of files in the directory: {} '.format(len(os.listdir(persistent_directory_path))))
   print('\t[!] Current Directory size is: {} Bytes '.format(current_dir_size))
   print('[!!!] Possible issue with the persistent queue!')
 else:
  print('[!] Error! Directory not found!')

 return len(os.listdir(persistent_directory_path)), current_dir_size


# Setup and send email
def mailer(num_files, dir_size):
 print('[*] In Mailer!!')

  #You will have to format the email below properly
 send_to = ['Nik Alleyne <nik alleyne at gmail dot com>']
 send_from = email.utils.formataddr(('IBM QRadar' , 'IBMQRadar@securitynik.local'))
 
 qradar_version = sp.check_output(['/opt/qradar/bin/getHostVersion.sh']).split('\n')[2]


 # Read QRadar device hostname
 msg_body = '[*] Running on host: {} \n' .format(hostname())
 msg_body = msg_body + '\n[*] Current QRadar Version: {} \n'.format(qradar_version.split('=')[1])
 msg_body = msg_body + '\n[*] Persistent Folder: /store/persistent_queue/ecs-ec-ingress.ecs-ec-ingress/ \n'
 msg_body = msg_body + '\n[*] Current Status as of {} \n'.format(datetime.datetime.now())
 msg_body = msg_body + '\n[*] Current Number of files: {} \n'.format(num_files)
 msg_body = msg_body + '\n[*] Current Directory Size in Bytes:{}B \n'.format(dir_size)
 msg_body = msg_body + '\n[*] Current Directory Size in MBs: {}M'.format(int(dir_size)/1000000)
 msg_body = msg_body + '\n[*] Current Directory Size in Gigs: {}G'.format(int(dir_size)/1000000000) 
 msg_body = msg_body + '\n\n ***Powered By Sirius Computer Solutions *** \n\n'
 
 print('[*] Preparing to send mail ... ')
 msg = mt(msg_body)
 msg['To'] = ','.join(send_to)
 msg['From'] = send_from
 
 if ( (num_files < 10 ) and (int(dir_size) < 10000000000 ) ):
  msg['Subject'] = '[**] {} :: INFORMATIONAL - Monitoring of Persistent Queue [**]'.format(hostname())
 else:
  msg['Subject'] = '[!!] {} :: POTENTIAL PROBLEM - Persistent Queue is growing [!!]'.format(hostname())

 send_mail = smtplib.SMTP('localhost')

 try:
      # once again, you will have to properly format the email    
   send_mail.sendmail(send_from, 'nikalleyne at gmail dot com' , msg.as_string())
   print('[*] Mail sent successfully!')
 except:
  print('[!] Oops! Looks like an error occurred while sending the mail.') 

 send_mail.quit()
 

# main function
def main():
 print('[*] In Main!')
 verify_os_qradar_version()
 num_files,dir_size = check_directory(0,0)
 mailer(num_files,dir_size)


if __name__ == '__main__':
 main()


Here is an example of the email output once the script runs successfully:


Subject: [**] qradar.securitynik.local :: INFORMATIONAL - Monitoring of Persistent Queue [**]

[*] Running on host: qradar.securitynik.local
[*] Current QRadar Version: "7.3.2"
[*] Persistent Folder: /store/persistent_queue/ecs-ec-ingress.ecs-ec-ingress/
[*] Current Status as of 2019-09-27 16:35:26.033240
[*] Current Number of files: 3
[*] Current Directory Size in Bytes:104962873B
[*] Current Directory Size in MBs: 104M
[*] Current Directory Size in Gigs: 0G

 *** Powered By SecurityNik ***

Hope someone else who is having this problem finds this script useful.

Saturday, August 24, 2019

Maximizing the SIEM - Moving beyond compliance by addressing risk

Authors: Nik Alleyne & Jide Ajomale

Working for a managed security services provider (MSSP) puts us in a position to see how and why organizations implement many of their security tools. One of the major lessons learned through our work, collaboration with colleagues across different organizations, and communication with security folks in general, is that many organizations implement these technologies primarily to satisfy a compliance requirement. While this is not necessarily bad, we should also look to maximize the investment made in these technologies once the compliance requirement has been met. This means we now have to figure out how to move beyond compliance to risk management.

In this post specifically, the focus is on the Security Information and Event Management (SIEM) system. How do we maximize the SIEM once compliance has been met? From our perspective, once an organization has satisfied its compliance requirements, its next strategic move should be primarily risk based. Security is a game of cat and mouse, i.e. attackers vs defenders. With that said, it is extremely difficult for us defenders and/or organizations to defend and/or protect all of our assets equally. This is further compounded by the fact that we live in a world in which we must assume a breach will occur. The most recent significant breach, related to MasterCard, shows that be it a (mis)configuration issue or a vulnerability, compromises will occur. What matters is how soon we detect them. Thus, it is imperative the SIEM is seen as more than simply a device implemented to satisfy a compliance requirement, and instead as a device we use as part of our risk management strategy.

Once the organization has achieved its compliance objectives and is now looking at the risks, the first question it should answer is: what are our High Value Assets (HVAs)? Once the organization has a clear understanding of its High Value Assets, it should ensure these are effectively monitored via the SIEM. The reality is, the organization’s High Value Assets are more than likely the threat actor’s High Value Targets (HVT).

Important to note, threat actors are mostly after the data you have. They may also use this data as a means to attack your infrastructure, and may even use your infrastructure as a means to attack other organizations. These actions disrupt not only your business operations but also those of your partners and/or other organizations involved. One of the most important breaches which can be used to reinforce this point is the compromise of Target, which was initiated from a compromised HVAC provider (krebsonsecurity.com, 2015). At this point, some of the organization’s likely HVAs (in no specific order) may be as follows:

1. Internet facing e-commerce servers
2. Devices providing authentication services (Active Directory, etc.)
3. Devices acting as guards :-) (firewalls, proxies, routers, etc.)
4. Endpoint security tools, etc.
5. Critical databases
6. Custom applications
7. etc ...

The organization must be able to clearly answer the question of why these assets are important to it. Simply categorizing all assets as high value does the organization, its security team and the investment made into the SIEM tool(s) no good. Two key perspectives the organization can use to determine whether or not these assets are truly high value are the potential impact on the company’s reputation and brand, and the potential impact on its financial statements.

Now that the organization has a clear understanding, and a decision has been made on those High Value Assets, its next step is to identify the risk associated with those assets. There are different formulas to calculate risk; the organization should choose one it feels most comfortable with. For our purposes we will follow the OWASP Risk Rating Methodology:

Risk = Likelihood * Impact

From the organization’s HVAs, the next step should be to prioritize those assets based on their likelihood of compromise and the impact to the organization if one of these devices were to be compromised. As you think of likelihood, consider a situation where a host is vulnerable, is exposed to the internet, and an exploit is available. From our perspective and experience, there is a high likelihood that this host will be compromised. The question may simply be how long it takes before it actually is.

Now that we understand the likelihood, let's look at the impact. Let's assume, on a scale of 1-10, there is a high likelihood that this host will be compromised. If it is successfully compromised, what would the impact be? Could the business continue for a day, a few months or even years if this device were compromised? That is, the device no longer maintains the confidentiality of its data, its users have lost confidence in its integrity, and the device is no longer available to the organization. Basically, is any part of the CIA (Confidentiality, Integrity and Availability) triad still intact?

What about economic impact? Will this cost the organization thousands, millions or billions of dollars the longer the CIA triad is not intact? Are there regulatory, brand and/or reputational impacts which should be considered if this asset were compromised? More importantly, since this post is written from a SIEM perspective, will you be able to detect and investigate the incident with the ultimate aim of answering the who, what, when, where, why and how of the breach/security incident? The logs (and packets/flows) put you in the best position to answer these questions.
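To make the Risk = Likelihood * Impact formula concrete, here is a minimal sketch in Python (the asset names and the 1-10 likelihood/impact scores below are made up purely for illustration) that ranks assets by their computed risk:

#!/usr/bin/env python
# Toy example: rank hypothetical assets by risk = likelihood * impact (both on a 1-10 scale)
assets = {
    'internet facing e-commerce server': (9, 9),
    'active directory domain controller': (6, 10),
    'internal hr database': (4, 8),
}

for name, (likelihood, impact) in sorted(assets.items(), key=lambda item: item[1][0] * item[1][1], reverse=True):
    print('{:<40} likelihood={:<3} impact={:<3} risk={}'.format(name, likelihood, impact, likelihood * impact))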

Considering the preceding, and as stated above, once you have satisfied your compliance requirements, to maximize your security technology investments (more specifically the SIEM in this case) you need to look at the risk associated with the assets which run the business. Once this is clear, conduct threat modelling exercises in order to identify supporting infrastructure or applications that could be leveraged to compromise these assets, and implement logging for the highest-risk devices first. Ensure you make sound business and risk decisions as to whether you log successes and/or failures, permits and/or denies, allowed vs blocked, etc. For more guidance on considerations you should have when logging, see Nik’s presentation on building a forensically capable network infrastructure (Alleyne, 2019).
 

Hope this post helps you to look beyond compliance and instead consider the risk to your business as you maximize your investment in the SIEM.

Additional Readings
https://drive.google.com/file/d/14T6hjVZmsd_1iZmduLwLZto6-Kx4uLGF/view
https://www.owasp.org/index.php/OWASP_Risk_Rating_Methodology
https://krebsonsecurity.com/2014/02/target-hackers-broke-in-via-hvac-company/
http://www.securitywarriorconsulting.com/pdfs/chuvakin_RSA_2010_SEIMBC_WP_0810.pdf
https://www.cloudaccess.com/wp-content/uploads/2015/06/REACT-Moving-Beyond-SIEM.pdf
https://www.rsaconference.com/writable/presentations/file_upload/grc-w05-how_to_measure_anything_in_cybersecurity_risk.pdf

Thursday, May 2, 2019

Having Fun with CrackMapExec - Snort IDS/IPS Analysis

Now that we have completed the CrackMapExec attack, log analysis, packet analysis and Zeek analysis, let's see what we can learn from Snort. I'm using the Snort community ruleset and the default configuration as of April 5, 2019. By the end of this post, hopefully you will understand the importance of customizing your security tools to suit your environment.

root@securitynik:~# snort -A full -K ascii -l . -r cme-scan.pcap -c /etc/snort/snort.conf 

Commencing packet processing (pid=5478)
===============================================================================
Run time for packet processing was 1.4221 seconds
Snort processed 1570 packets.
Snort ran for 0 days 0 hours 0 minutes 1 seconds
   Pkts/sec:         1570
===============================================================================
Memory usage summary:
  Total non-mmapped bytes (arena):       45666304
  Bytes in mapped regions (hblkhd):      13574144
  Total allocated space (uordblks):      40400688
  Total free space (fordblks):           5265616
  Topmost releasable block (keepcost):   93600
===============================================================================
Packet I/O Totals:
   Received:         1570
   Analyzed:         1570 (100.000%)
    Dropped:            0 (  0.000%)
   Filtered:            0 (  0.000%)
Outstanding:            0 (  0.000%)
   Injected:            0
===============================================================================
Breakdown by protocol (includes rebuilt packets):
        Eth:         1572 (100.000%)
       VLAN:            0 (  0.000%)
        IP4:           51 (  3.244%)
       Frag:            0 (  0.000%)
       ICMP:            1 (  0.064%)
        UDP:            0 (  0.000%)
        TCP:           50 (  3.181%)
        IP6:            0 (  0.000%)
    IP6 Ext:            0 (  0.000%)
   IP6 Opts:            0 (  0.000%)
      Frag6:            0 (  0.000%)
      ICMP6:            0 (  0.000%)
       UDP6:            0 (  0.000%)
       TCP6:            0 (  0.000%)
     Teredo:            0 (  0.000%)
    ICMP-IP:            0 (  0.000%)
    IP4/IP4:            0 (  0.000%)
    IP4/IP6:            0 (  0.000%)
    IP6/IP4:            0 (  0.000%)
    IP6/IP6:            0 (  0.000%)
        GRE:            0 (  0.000%)
    GRE Eth:            0 (  0.000%)
   GRE VLAN:            0 (  0.000%)
    GRE IP4:            0 (  0.000%)
    GRE IP6:            0 (  0.000%)
GRE IP6 Ext:            0 (  0.000%)
   GRE PPTP:            0 (  0.000%)
    GRE ARP:            0 (  0.000%)
    GRE IPX:            0 (  0.000%)
   GRE Loop:            0 (  0.000%)
       MPLS:            0 (  0.000%)
        ARP:         1521 ( 96.756%)
        IPX:            0 (  0.000%)
   Eth Loop:            0 (  0.000%)
   Eth Disc:            0 (  0.000%)
   IP4 Disc:            0 (  0.000%)
   IP6 Disc:            0 (  0.000%)
   TCP Disc:            0 (  0.000%)
   UDP Disc:            0 (  0.000%)
  ICMP Disc:            0 (  0.000%)
All Discard:            0 (  0.000%)
      Other:            0 (  0.000%)
Bad Chk Sum:            0 (  0.000%)
    Bad TTL:            0 (  0.000%)
     S5 G 1:            0 (  0.000%)
     S5 G 2:            2 (  0.127%)
      Total:         1572
===============================================================================
Action Stats:
     Alerts:            1 (  0.064%)
     Logged:            1 (  0.064%)
     Passed:            0 (  0.000%)
Limits:
      Match:            0
      Queue:            0
        Log:            0
      Event:            0
      Alert:            0
Verdicts:
      Allow:         1570 (100.000%)
      Block:            0 (  0.000%)
    Replace:            0 (  0.000%)
  Whitelist:            0 (  0.000%)
  Blacklist:            0 (  0.000%)
     Ignore:            0 (  0.000%)
      Retry:            0 (  0.000%)
===============================================================================
Frag3 statistics:
        Total Fragments: 0
      Frags Reassembled: 0
               Discards: 0
          Memory Faults: 0
               Timeouts: 0
               Overlaps: 0
              Anomalies: 0
                 Alerts: 0
                  Drops: 0
     FragTrackers Added: 0
    FragTrackers Dumped: 0
FragTrackers Auto Freed: 0
    Frag Nodes Inserted: 0
     Frag Nodes Deleted: 0
===============================================================================
===============================================================================
Stream statistics:
            Total sessions: 4
              TCP sessions: 4
              UDP sessions: 0
             ICMP sessions: 0
               IP sessions: 0
                TCP Prunes: 0
                UDP Prunes: 0
               ICMP Prunes: 0
                 IP Prunes: 0
TCP StreamTrackers Created: 4
TCP StreamTrackers Deleted: 4
              TCP Timeouts: 0
              TCP Overlaps: 0
       TCP Segments Queued: 21
     TCP Segments Released: 21
       TCP Rebuilt Packets: 9
         TCP Segments Used: 16
              TCP Discards: 0
                  TCP Gaps: 1
      UDP Sessions Created: 0
      UDP Sessions Deleted: 0
              UDP Timeouts: 0
              UDP Discards: 0
                    Events: 0
           Internal Events: 0
           TCP Port Filter
                  Filtered: 0
                 Inspected: 0
                   Tracked: 48
           UDP Port Filter
                  Filtered: 0
                 Inspected: 0
                   Tracked: 0
===============================================================================
===============================================================================
SMTP Preprocessor Statistics
  Total sessions                                    : 0
  Max concurrent sessions                           : 0
===============================================================================
dcerpc2 Preprocessor Statistics
  Total sessions: 3
  Total sessions aborted: 3

  Transports
    SMB
      Total sessions: 3
      Packet stats
        Packets: 6
        Maximum outstanding requests: 1
        SMB command requests/responses processed
          Negotiate (0x72) : 3/0
===============================================================================
===============================================================================
SIP Preprocessor Statistics
  Total sessions: 0
===============================================================================
Snort exiting

Let's see if our "alert" file was created.


root@securitynik:~/cme# ls .
10.0.0.2  alert

Now that we know the "alert" file exists, let's see what type of alerts were created.


root@securitynik:~/cme# cat alert 
[**] [1:404:6] ICMP Destination Unreachable Protocol Unreachable [**]
[Classification: Misc activity] [Priority: 3] 
04/07-23:12:37.586514 10.0.0.2 -> 10.0.0.100
ICMP TTL:255 TOS:0x0 ID:24 IpLen:20 DgmLen:56
Type:3  Code:2  DESTINATION UNREACHABLE: PROTOCOL UNREACHABLE
** ORIGINAL DATAGRAM DUMP:
10.0.0.100:50838 -> 10.0.0.2:445
TCP TTL:64 TOS:0x0 ID:17710 IpLen:20 DgmLen:60 DF
Seq: 0x4C552250

From the above, it seems that from Snort's perspective, the only thing detected was a single ICMP Destination Unreachable (Protocol Unreachable) message.

When the pcap file containing the share enumeration traffic was fed to Snort, no alerts were generated.

root@securitynik:~/cme# snort -A console -K none -c /etc/snort/snort.conf -r cme-enum-shares.pcap 
....
===============================================================================
Action Stats:
     Alerts:            0 (  0.000%)
     Logged:            0 (  0.000%)
     Passed:            0 (  0.000%)
Limits:
      Match:            0
      Queue:            0
        Log:            0
      Event:            0
      Alert:            0
Verdicts:
      Allow:          487 (100.000%)
      Block:            0 (  0.000%)
    Replace:            0 (  0.000%)
  Whitelist:            0 (  0.000%)
  Blacklist:            0 (  0.000%)
     Ignore:            0 (  0.000%)
      Retry:            0 (  0.000%)
===============================================================================


As can be seen above, no alerts were created and 487 packets were allowed.


root@securitynik:~/cme# snort -A full -K ascii -l . -r cme-enum-users.pcap -c /etc/snort/snort.conf 

Looking at the user enumeration, we see no alerts were created yet again.


===============================================================================
Packet I/O Totals:
   Received:          438
   Analyzed:          438 (100.000%)
    Dropped:            0 (  0.000%)
   Filtered:            0 (  0.000%)
Outstanding:            0 (  0.000%)
   Injected:            0
===============================================================================
Breakdown by protocol (includes rebuilt packets):
        Eth:          442 (100.000%)
       VLAN:            0 (  0.000%)
        IP4:          437 ( 98.869%)
       Frag:            0 (  0.000%)
       ICMP:            0 (  0.000%)
        UDP:            0 (  0.000%)
        TCP:          437 ( 98.869%)
        IP6:            0 (  0.000%)
    IP6 Ext:            0 (  0.000%)
   IP6 Opts:            0 (  0.000%)
      Frag6:            0 (  0.000%)
      ICMP6:            0 (  0.000%)
       UDP6:            0 (  0.000%)
       TCP6:            0 (  0.000%)
     Teredo:            0 (  0.000%)
    ICMP-IP:            0 (  0.000%)
    IP4/IP4:            0 (  0.000%)
    IP4/IP6:            0 (  0.000%)
    IP6/IP4:            0 (  0.000%)
    IP6/IP6:            0 (  0.000%)
        GRE:            0 (  0.000%)
    GRE Eth:            0 (  0.000%)
   GRE VLAN:            0 (  0.000%)
    GRE IP4:            0 (  0.000%)
    GRE IP6:            0 (  0.000%)
GRE IP6 Ext:            0 (  0.000%)
   GRE PPTP:            0 (  0.000%)
    GRE ARP:            0 (  0.000%)
    GRE IPX:            0 (  0.000%)
   GRE Loop:            0 (  0.000%)
       MPLS:            0 (  0.000%)
        ARP:            5 (  1.131%)
        IPX:            0 (  0.000%)
   Eth Loop:            0 (  0.000%)
   Eth Disc:            0 (  0.000%)
   IP4 Disc:            0 (  0.000%)
   IP6 Disc:            0 (  0.000%)
   TCP Disc:            0 (  0.000%)
   UDP Disc:            0 (  0.000%)
  ICMP Disc:            0 (  0.000%)
All Discard:            0 (  0.000%)
      Other:            0 (  0.000%)
Bad Chk Sum:            0 (  0.000%)
    Bad TTL:            0 (  0.000%)
     S5 G 1:            0 (  0.000%)
     S5 G 2:            4 (  0.905%)
      Total:          442
===============================================================================
Action Stats:
     Alerts:            0 (  0.000%)
     Logged:            0 (  0.000%)
     Passed:            0 (  0.000%)
Limits:
      Match:            0
      Queue:            0
        Log:            0
      Event:            0
      Alert:            0
Verdicts:
      Allow:          438 (100.000%)
      Block:            0 (  0.000%)
    Replace:            0 (  0.000%)
  Whitelist:            0 (  0.000%)
  Blacklist:            0 (  0.000%)
     Ignore:            0 (  0.000%)
      Retry:            0 (  0.000%)
===============================================================================

Let's run Snort against the packet capture containing the password policy enumeration packets and see if anything shows up.


root@securitynik:~/cme# snort -r cme-pass-pol.pcap -A console -K none -c /etc/snort/snort.conf 
.............
===============================================================================
Run time for packet processing was 0.3095 seconds
Snort processed 113 packets.
Snort ran for 0 days 0 hours 0 minutes 0 seconds
   Pkts/sec:          113
===============================================================================
....
===============================================================================
Packet I/O Totals:
   Received:          113
   Analyzed:          113 (100.000%)
    Dropped:            0 (  0.000%)
   Filtered:            0 (  0.000%)
Outstanding:            0 (  0.000%)
   Injected:            0
===============================================================================
....

===============================================================================
Action Stats:
     Alerts:            0 (  0.000%)
     Logged:            0 (  0.000%)
     Passed:            0 (  0.000%)
Limits:
      Match:            0
      Queue:            0
        Log:            0
      Event:            0
      Alert:            0
Verdicts:
      Allow:          113 (100.000%)
      Block:            0 (  0.000%)
    Replace:            0 (  0.000%)
  Whitelist:            0 (  0.000%)
  Blacklist:            0 (  0.000%)
     Ignore:            0 (  0.000%)
      Retry:            0 (  0.000%)
===============================================================================

Uh oh! Once again, we have no visibility into the packets and thus Snort produced no results.

Let's now see what the communication looks like when CrackMapExec runs a PowerShell command.


root@securitynik:~/cme# snort -A console -K none -c /etc/snort/snort.conf -r cme-powershell.pcap 

Commencing packet processing (pid=7651)
04/18-04:32:12.474140  [**] [1:1917:6] SCAN UPnP service discover attempt [**] [Classification: Detection of a Network Scan] [Priority: 3] {UDP} 10.0.0.3:56736 -> 239.255.255.250:1900
04/18-04:32:13.474803  [**] [1:1917:6] SCAN UPnP service discover attempt [**] [Classification: Detection of a Network Scan] [Priority: 3] {UDP} 10.0.0.3:56736 -> 239.255.255.250:1900
04/18-04:32:14.475259  [**] [1:1917:6] SCAN UPnP service discover attempt [**] [Classification: Detection of a Network Scan] [Priority: 3] {UDP} 10.0.0.3:56736 -> 239.255.255.250:1900
04/18-04:32:15.476480  [**] [1:1917:6] SCAN UPnP service discover attempt [**] [Classification: Detection of a Network Scan] [Priority: 3] {UDP} 10.0.0.3:56736 -> 239.255.255.250:1900
===============================================================================
Run time for packet processing was 1.751 seconds
Snort processed 247 packets.
Snort ran for 0 days 0 hours 0 minutes 1 seconds
   Pkts/sec:          247
===============================================================================
Packet I/O Totals:
   Received:          247
   Analyzed:          247 (100.000%)
    Dropped:            0 (  0.000%)
   Filtered:            0 (  0.000%)
Outstanding:            0 (  0.000%)
   Injected:            0
===============================================================================
Breakdown by protocol (includes rebuilt packets):
        Eth:          250 (100.000%)
       VLAN:            0 (  0.000%)
        IP4:          242 ( 96.800%)
....
        UDP:            4 (  1.600%)
        TCP:          238 ( 95.200%)
....
Bad Chk Sum:          127 ( 50.800%)
    Bad TTL:            0 (  0.000%)
     S5 G 1:            3 (  1.200%)
     S5 G 2:            0 (  0.000%)
      Total:          250
===============================================================================
Action Stats:
     Alerts:            4 (  1.600%)
     Logged:            4 (  1.600%)
     Passed:            0 (  0.000%)
Limits:
      Match:            0
      Queue:            0
        Log:            0
      Event:            0
      Alert:            0
Verdicts:
      Allow:          247 (100.000%)
      Block:            0 (  0.000%)
    Replace:            0 (  0.000%)
  Whitelist:            0 (  0.000%)
  Blacklist:            0 (  0.000%)
     Ignore:            0 (  0.000%)
      Retry:            0 (  0.000%)
===============================================================================

So it looks like we got four alerts above. However, these in no way reflect our actual concerns.

Let's now move on to the final command. In this case, we will be running Snort against the pcap which contains our "ncat.exe" execution.


root@securitynik:~/cme# snort -A console -K none -r cme-ncat.pcap -c /etc/snort/snort.conf 
......
===============================================================================
Run time for packet processing was 1.2717 seconds
Snort processed 356 packets.
Snort ran for 0 days 0 hours 0 minutes 1 seconds
   Pkts/sec:          356
===============================================================================
Memory usage summary:
  Total non-mmapped bytes (arena):       44724224
  Bytes in mapped regions (hblkhd):      13574144
  Total allocated space (uordblks):      40400432
  Total free space (fordblks):           4323792
  Topmost releasable block (keepcost):   3680
===============================================================================
Packet I/O Totals:
   Received:          356
   Analyzed:          356 (100.000%)
    Dropped:            0 (  0.000%)
   Filtered:            0 (  0.000%)
Outstanding:            0 (  0.000%)
   Injected:            0
===============================================================================
Breakdown by protocol (includes rebuilt packets):
        Eth:          360 (100.000%)
       VLAN:            0 (  0.000%)
        IP4:          360 (100.000%)
       Frag:            0 (  0.000%)
       ICMP:            0 (  0.000%)
        UDP:            0 (  0.000%)
        TCP:          360 (100.000%)
        IP6:            0 (  0.000%)
    IP6 Ext:            0 (  0.000%)
   IP6 Opts:            0 (  0.000%)
      Frag6:            0 (  0.000%)
      ICMP6:            0 (  0.000%)
       UDP6:            0 (  0.000%)
       TCP6:            0 (  0.000%)
     Teredo:            0 (  0.000%)
    ICMP-IP:            0 (  0.000%)
    IP4/IP4:            0 (  0.000%)
    IP4/IP6:            0 (  0.000%)
    IP6/IP4:            0 (  0.000%)
    IP6/IP6:            0 (  0.000%)
        GRE:            0 (  0.000%)
    GRE Eth:            0 (  0.000%)
   GRE VLAN:            0 (  0.000%)
    GRE IP4:            0 (  0.000%)
    GRE IP6:            0 (  0.000%)
GRE IP6 Ext:            0 (  0.000%)
   GRE PPTP:            0 (  0.000%)
    GRE ARP:            0 (  0.000%)
    GRE IPX:            0 (  0.000%)
   GRE Loop:            0 (  0.000%)
       MPLS:            0 (  0.000%)
        ARP:            0 (  0.000%)
        IPX:            0 (  0.000%)
   Eth Loop:            0 (  0.000%)
   Eth Disc:            0 (  0.000%)
   IP4 Disc:            0 (  0.000%)
   IP6 Disc:            0 (  0.000%)
   TCP Disc:            0 (  0.000%)
   UDP Disc:            0 (  0.000%)
  ICMP Disc:            0 (  0.000%)
All Discard:            0 (  0.000%)
      Other:            0 (  0.000%)
Bad Chk Sum:          192 ( 53.333%)
    Bad TTL:            0 (  0.000%)
     S5 G 1:            3 (  0.833%)
     S5 G 2:            1 (  0.278%)
      Total:          360
===============================================================================
Action Stats:
     Alerts:            0 (  0.000%)
     Logged:            0 (  0.000%)
     Passed:            0 (  0.000%)
Limits:
      Match:            0
      Queue:            0
        Log:            0
      Event:            0
      Alert:            0
Verdicts:
      Allow:          356 (100.000%)
      Block:            0 (  0.000%)
    Replace:            0 (  0.000%)
  Whitelist:            0 (  0.000%)
  Blacklist:            0 (  0.000%)
     Ignore:            0 (  0.000%)
      Retry:            0 (  0.000%)
===============================================================================


Bummer!! Once again, we see there are no alerts. It seems that, by default, we may have lots of blind spots.

Well let's wrap up this post here.

Important Note: One of the reasons why I used the default ruleset without any modification, as in enabling or disabling any rule, is because I wanted to emphasize the importance of ensuring you configure and customize your security tools for your specific environment. This is true for all of your security tools which give you the ability to customize for your unique environment.
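As an illustration of that point, a custom rule along the following lines could be dropped into local.rules to flag a burst of SMB connection attempts like the ones CrackMapExec generated. Treat this purely as a sketch: the sid, threshold values and message text are arbitrary examples, and the rule has not been tuned or tested against this traffic.

# Example local.rules entry: flag 10+ SYNs to port 445 from a single source within 60 seconds
alert tcp any any -> $HOME_NET 445 (msg:"POLICY possible SMB enumeration - burst of connections to port 445"; flags:S; detection_filter:track by_src, count 10, seconds 60; classtype:attempted-recon; sid:1000001; rev:1;)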

References:
Snort

Posts in this series:
Having Fun with CrackMapExec
Having Fun with CrackMapExec - Log Analysis
Having Fun with CrackMapExec - Packet Analysis - CrackMapExec
Having Fun with CrackMapExec - Zeek (Bro) Analysis
Having Fun with CrackMapExec - Snort IDS/IPS Analysis