Thursday, April 28, 2022

Beginning MongoDB - MongoClient

To see the full notebook, check out my GitHub site.

# %%
# Import the needed libraries
import requests
from pymongo import MongoClient

# %%
# Set up the connection to the MongoDB server
mongodb_client = MongoClient('10.0.0.120', 27017)
mongodb_client

# %%
# Ping the server to verify the connection
mongodb_client.admin.command('ping')
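# %%
# The host/port pair above can equally be expressed as a MongoDB connection URI.
# A sketch of building one, adding a server-selection timeout so a dead server
# fails fast (the host is this lab's; yours will differ):
host, port = '10.0.0.120', 27017
uri = f'mongodb://{host}:{port}/?serverSelectionTimeoutMS=2000'
print(uri)
# MongoClient accepts this form as well:
# mongodb_client = MongoClient(uri)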

# %%
# List the databases on the server
mongodb_client.list_database_names()

# %%
# List all collections in the securitynik database
securitynik_db = mongodb_client['securitynik-mongo-db']
securitynik_db.list_collection_names()

# %%
# Get the db host information
securitynik_db.HOST

# %%
# Get the field names of the first document in the employees collection
securitynik_db.employees.find_one().keys()

# %%
# Create a collection named employees within the securitynik_db
employees = securitynik_db.employees

# Create a collection named user
users = securitynik_db.users
securitynik_db, employees, users


# %%
employee = {
    'fname' : 'Nik',
    'lname' : 'Alleyne',
    'Active' : True,
    'profession' : 'Blogger'
}

# %%
# Add a record to the employees collection
employees.insert_one(employee)

# %%
# Read the employee collection and print the first record
employees.find_one()

# %%
insert_many = [
        {'user':'NA', 'blog':'www.blogspot.com', 'age':10, 'sex':'Female'}, 
        {'user':'NA', 'blog':'www.blogspot.com', 'age':20, 'sex':'Male'}, 
        {'user':'SA', 'blog':'www.blogspot.com', 'age':30, 'sex':'Female'}, 
        {'user':'TQ', 'blog':'www.blogspot.com', 'age':40, 'sex':'Male'}, 
        {'user':'DP', 'blog':'www.blogspot.com', 'age':50, 'sex':'Female'}, 
        {'user':'TA', 'blog':'www.blogspot.com', 'age':60, 'sex':'Male'}, 
        {'user':'PK', 'blog':'www.blogspot.com', 'age':70, 'sex':'Female'}, 
        {'user':'User-1', 'blog':'www.blogspot.com', 'age':80, 'sex':'Male'}, 
        {'user':'User-2', 'blog':'www.blogspot.com', 'age':90, 'sex':'Female'}, 
        {'user':'User-3', 'blog':'www.blogspot.com', 'age':1000, 'sex':'Male'}, 
        ]

# %%
# Insert the above records
employees.insert_many(insert_many)

# %%
# Leveraging the query operators
for document in securitynik_db.employees.find({}):
    print(document)

# %%
# Count the number of documents in the users and employees collections
securitynik_db.users.count_documents({}), securitynik_db.employees.count_documents(filter={})

# %%
# Find one record
securitynik_db.employees.find_one(filter={})

# %%
# Count all records where the sex is male
securitynik_db.employees.count_documents({'sex':'Male'})

# %%
# Count all records where the sex is Female
securitynik_db.employees.count_documents({'sex':'Female'})

# %%
# Combining the filter to look for a more targeted record
securitynik_db.employees.count_documents({'sex':'Female', 'age':90, 'user':'User-2'})

# %%
# Find the record that matches the above criterion
securitynik_db.employees.find_one({'sex':'Female', 'age':90, 'user':'User-2'})
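# %%
# Conceptually, an equality filter like the one above matches a document when
# every key in the filter appears in the document with an equal value. A
# minimal pure-Python sketch of that semantics (not pymongo internals):
def matches(document, query):
    """Return True when every key/value pair in the query
    is present with an equal value in the document."""
    return all(document.get(key) == value for key, value in query.items())

sample = {'user': 'User-2', 'blog': 'www.blogspot.com', 'age': 90, 'sex': 'Female'}
print(matches(sample, {'sex': 'Female', 'age': 90, 'user': 'User-2'}))  # True
print(matches(sample, {'sex': 'Male'}))                                 # False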

# %%
# Leveraging the query to find all records where the age is equal to 20
for document in securitynik_db.employees.find({ 'age' : { '$eq':20 } }):
    print(document)

# %%
# Leveraging the query to find all records where the age is less than 20
for document in securitynik_db.employees.find({ 'age' : { '$lt':20 } }):
    print(document)

# %%
# Leveraging the query to find all records where the age is greater than 20
for document in securitynik_db.employees.find({ 'age' : { '$gt':80 } }):
    print(document)

# %%
# Leveraging the query to find all records within an array
for document in securitynik_db.employees.find({ 'age' : { '$in':[90, 1000] } }):
    print(document)

# %%
# Leveraging the query to find all records not within an array
for document in securitynik_db.employees.find({ 'age' : { '$nin':[90, 1000] } }):
    print(document)
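# %%
# The $in / $nin semantics are simple membership tests against a list of
# candidate values. A sketch of that behaviour over a few of the sample
# documents inserted earlier (semantics only, not pymongo internals):
docs = [
    {'user': 'User-2', 'age': 90},
    {'user': 'User-3', 'age': 1000},
    {'user': 'NA', 'age': 10},
]

ages = [90, 1000]
in_matches = [d for d in docs if d['age'] in ages]        # like {'$in': ages}
nin_matches = [d for d in docs if d['age'] not in ages]   # like {'$nin': ages}

print([d['user'] for d in in_matches])   # ['User-2', 'User-3']
print([d['user'] for d in nin_matches])  # ['NA']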

# %%
# Leveraging the query to find all records where the blogsOn field exists
for document in securitynik_db.employees.find({ 'blogsOn' : { '$exists': True } }):
    print(document)

# %%
# Leveraging the query to find all records where the blogsOn field exists and has 2 or more entries
# in the blogsOn Field
for document in securitynik_db.employees.find({ 'blogsOn.1' : { '$exists': True } }):
    print(document)

# %%
# finding all the unique values for the blogsOn field
securitynik_db.employees.distinct('blogsOn')

# %%
# finding all the unique values for the sex field
securitynik_db.employees.distinct('sex')

# %%
# finding all the unique values for the user field in the users collection
securitynik_db.users.distinct('user')

# %%
# Using distinct with regular expressions
# Looking for all records where the user starts with
# Nak..., Ney, Pam, or one of the characters s, a, d or i,
# and ends with alleyne, ignoring case
securitynik_db.users.distinct('user', { 'user' : {'$regex' : '^(Nak\w+|Ney|Pam|[sadi]).*alleyne$', '$options': 'i'} })
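# %%
# The same pattern can be exercised with Python's re module to see exactly
# what it matches. Note [sadi] is a character class matching a single
# character s, a, d or i, not the substring "sadi". The names below are
# made-up test strings:
import re

pattern = re.compile(r'^(Nak\w+|Ney|Pam|[sadi]).*alleyne$', re.IGNORECASE)

names = ['Nakita Alleyne', 'sadi alleyne', 'Pamela Alleyne', 'Bob Alleyne']
matched = [name for name in names if pattern.match(name)]
print(matched)  # ['Nakita Alleyne', 'sadi alleyne', 'Pamela Alleyne']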

# %%
# Leveraging projections to find the fields of interest
# Then return only the columns of interest
# 1 means return the field; _id is returned by default,
# so using 0 to suppress it.
list(securitynik_db.employees.find(filter={}, projection={'_id':0, 'fname':1, 'lname':1}))
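# %%
# A projection document maps field names to 1 (include) or 0 (exclude).
# Applied to a plain dict, the effect is roughly the following sketch:
def project(document, projection):
    """Keep the fields marked 1; drop everything else (e.g. _id marked 0)."""
    include = {k for k, v in projection.items() if v == 1}
    return {k: v for k, v in document.items() if k in include}

doc = {'_id': 'abc123', 'fname': 'Nik', 'lname': 'Alleyne', 'Active': True}
print(project(doc, {'_id': 0, 'fname': 1, 'lname': 1}))  # {'fname': 'Nik', 'lname': 'Alleyne'}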

# %%
# Expanding on above to sort the results
sorted_docs = securitynik_db.employees.find({}, sort=[('age', -1)])
list(sorted_docs)

# %%
# Expanding on above to sort the results
sorted_docs = securitynik_db.employees.find({'fname':'Nik'}, sort=[('_id', -1)])
list(sorted_docs)

# %%
# Limiting the number of returned records
sorted_docs = securitynik_db.employees.find({'fname':'Nik'}, sort=[('_id', -1)], limit=1)
list(sorted_docs)

# %%
# Limiting the number of returned records while taking advantage of skip
sorted_docs = securitynik_db.employees.find({'fname':'Nik'}, sort=[('_id', -1)], skip=1, limit=1)
list(sorted_docs)
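# %%
# Conceptually, sort + skip + limit compose the same way slicing composes
# with a sorted list. A sketch over sample documents:
docs = [{'user': 'NA', 'age': 10}, {'user': 'SA', 'age': 30}, {'user': 'TQ', 'age': 40}]

# sort=[('age', -1)] -> descending sort; skip=1, limit=1 -> slice [1:2]
result = sorted(docs, key=lambda d: d['age'], reverse=True)[1:1 + 1]
print(result)  # [{'user': 'SA', 'age': 30}]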

# %%
# Creating an index and leveraging same
securitynik_db.employees.create_index([('age', -1)])

# %%
# Leverage the index just created
# by sorting on the indexed age field
list(securitynik_db.employees.find({}, sort=[('age', -1)]))

# %%
# Create a compound index
securitynik_db.employees.create_index([('age', -1), ('fname', -1)])

# %%
# Grabbing initial information on the index
securitynik_db.employees.index_information()

# %%
# Getting an explanation about my query
# Note: _id values are ObjectIds, not plain strings
from bson.objectid import ObjectId
securitynik_db.employees.find({'fname':'Nik', '_id': ObjectId('6263735f43142a78a4071445')}).explain()

# %%
# Using aggregate feature
# doing most of the work at the server
list(securitynik_db.employees.aggregate([{'$match' : { 'fname' : 'Nik'}}, {'$project' : {'lname' : 1, 'fname' : 1, '_id' : 0 }}, { '$limit' : 3}]))

# %%
# Using aggregate feature
# doing most of the work at the server
# count the number of returned records

list(securitynik_db.employees.aggregate([{'$match' : { 'fname' : 'Nik'}}, {'$project' : {'lname' : 1, 'fname' : 1, '_id' : 0 }}, { '$limit' : 3}, {'$count' : 'fname'}]))
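# %%
# The server-side stages map onto familiar list operations: $match filters,
# $project reshapes, $limit truncates, $count collapses to a single count
# document. A pure-Python sketch of the same pipeline over sample documents:
docs = [
    {'_id': 1, 'fname': 'Nik', 'lname': 'Alleyne'},
    {'_id': 2, 'fname': 'Nik', 'lname': 'A'},
    {'_id': 3, 'fname': 'SA', 'lname': 'Alleyne'},
]

matched = [d for d in docs if d['fname'] == 'Nik']                        # $match
projected = [{'fname': d['fname'], 'lname': d['lname']} for d in matched] # $project, no _id
limited = projected[:3]                                                   # $limit
counted = [{'fname': len(limited)}]                                       # $count

print(limited)
print(counted)  # [{'fname': 2}]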

# %%


# %%


# %%
''' 
References:
https://hevodata.com/learn/python-pymongo-mongoclient/
https://campus.datacamp.com/courses/introduction-to-using-mongodb-for-data-science-with-python/
https://www.linkedin.com/pulse/mongodb-101-beginners-part1-amany-mounes/
https://www.mongodb.com/docs/manual/tutorial/query-documents/#std-label-read-operations-query-argument
https://www.tutorialspoint.com/python_mongodb/python_mongodb_tutorial.pdf
https://www.bogotobogo.com/python/MongoDB_PyMongo/python_MongoDB_pyMongo_tutorial_connecting_accessing.php
https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.html
https://www.mongodb.com/docs/manual/reference/operator/query/
https://www.educba.com/mongodb-query-operators/
https://www.w3schools.in/mongodb/projection-queries

'''

Thursday, April 14, 2022

Beginning SQLalchemy

See this GitHub link for the full notebook. https://github.com/SecurityNik/Data-Science-and-ML/blob/main/beginning-sql-alchemy-blog.ipynb
#!/usr/bin/env python
# coding: utf-8

# In[1]:


'''
In this post, I am learning more about SQLAlchemy
'''

# First up, import the sqlalchemy modules I will need to use
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String, Text, text, select, or_, and_, desc, func, case, cast, Float, DECIMAL, Boolean, insert, update, delete, Date, DateTime, ARRAY, ForeignKey

# Import datetime
from datetime import datetime

# import pandas as I will use this to import and view data
import pandas as pd


# In[2]:


# Delete the database file if it previously existed
get_ipython().system('del /f securitynik-db.sqlite')


# In[3]:


'''
Create a SQLite database and interface to it via create_engine.
As this database does not exist yet, it will be created on disk
using the relative path; hence the ///
This engine does not actually connect to the database at this time.
A connection will be made once a request has been made to perform a task
'''
securitynik_db_engine = create_engine('sqlite:///securitynik-db.sqlite', echo=True)
print(securitynik_db_engine)

''' 
Setup the metadata
Quoting from the sqlalchemy manual: "The MetaData is a registry which includes the ability to emit a limited set of schema generation commands to
the database"
'''
metadata = MetaData()
print(metadata)


# In[4]:


'''
With the engine created. Time to make a connection to the database
Since the database securitynik-db.sqlite does not exist,
the file will be created on the file system
'''
securitynik_db_connection = securitynik_db_engine.connect()
securitynik_db_connection


# In[5]:


'''
Verifying the securitynik-db.sqlite file has been created on the file system
and that it is currently empty, as no data has been written to it
'''
get_ipython().system('dir securitynik-db.sqlite')


# In[6]:


# With the file now created. Time to create some tables

# Create an employee Table
employee_table = Table('employees', metadata,
            Column('EmployeeID', Integer(), primary_key=True, nullable=False, unique=True, autoincrement=True),
            Column('FName', String(255), nullable=True),
            Column('LName', String(255), nullable=True),
            Column('Active', Boolean(), default=True, nullable=False),
            Column('Comments', String(255), default='securitynik.com employee')
            )


# In[7]:


'''
Create a blogs table
Setup the blogger_id field to link back to the EmployeeID field in the employees table
Note, could have also used ForeignKey(employee_table.columns.EmployeeID) to set up the foreign key
'''

blogs_table = Table('blogs', metadata,
            Column('BlogID', Integer(), primary_key=True, nullable=False, unique=True, autoincrement=True),
            Column('blogger_id', Integer(), ForeignKey('employees.EmployeeID'), nullable=False),
            Column('BlogTitle', String(255), nullable=True),
            Column('Blogger', String(255), default='Nik Alleyne', nullable=False),
            Column('Date', DateTime(), default=datetime.now),
            Column('URL', String(255), nullable=True),
            Column('Comments', Text(), default='Blog post created by Nik Alleyne')
            )


# In[8]:


# Create a table other
other_table = Table('other', metadata,
            Column('ID', Integer(), primary_key=True, nullable=False, unique=True, autoincrement=True),
            Column('Comments', String(255),  nullable=True)
            )


# In[9]:


# Create all the above defined tables
metadata.create_all(securitynik_db_connection)
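# Under the hood, create_all emits CREATE TABLE statements. Roughly the DDL
# for the employees table, sketched with the stdlib sqlite3 module against a
# throwaway in-memory database (column types approximated):
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('''
    CREATE TABLE employees (
        EmployeeID INTEGER PRIMARY KEY AUTOINCREMENT,
        FName      VARCHAR(255),
        LName      VARCHAR(255),
        Active     BOOLEAN NOT NULL DEFAULT 1,
        Comments   VARCHAR(255) DEFAULT 'securitynik.com employee'
    )
''')

# Confirm the table exists in sqlite's catalog
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' AND name='employees'")]
print(tables)  # ['employees']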


# In[10]:


# Verifying the tables were successfully created by viewing the metadata object
metadata.tables


# In[11]:


# Taking a different view of the tables via metadata
metadata.sorted_tables


# In[12]:


'''
With the tables created time to insert data
first into the employees table.
I will first insert 1 record
At the same time, return the number of rows impacted via the rowcount
'''
securitynik_db_connection.execute(insert(employee_table).values(FName='Nik', LName='Alleyne', Active=True, Comments='Blog Author')).rowcount


# In[13]:


'''
Add an entry to the blog table
'''
securitynik_db_connection.execute(insert(blogs_table).values(blogger_id=1, BlogTitle='Beginning SQLAlchemy', URL='http://www.securitynik.com/beginning-sql-alchemy.html')).rowcount


# In[14]:


# Insert some data into the other table
securitynik_db_connection.execute(insert(other_table).values(Comments='Nothing Exciting')).rowcount


# In[15]:


'''
Now that I can insert 1 record at a time,
time to insert multiple records via a list of
dictionaries
'''
add_multiple_employees = [
        { 'FName':'S', 'LName':'Alleyne',  'Active':True, 'Comments':'Blog Author' }, 
        { 'FName':'P', 'LName':'Khan',  'Active':False, 'Comments':'Blog Admin'},
        { 'FName':'TQ', 'LName':'G', 'Active':True, 'Comments':'Blog Manager'},
        { 'FName':'T', 'LName':'A', 'Active':False, 'Comments':'Blog Author' },
        { 'FName':'D', 'LName':'P', 'Active':True, 'Comments':'Blog Maintainer' },
        { 'FName':'J', 'LName':'S', 'Active':False, 'Comments':'Blog Contributor' },
        { 'FName':'C', 'LName':'P',  'Active':True, 'Comments':'Blog Comments Admin' },
        { 'FName':'A', 'LName':'W', 'Active':False, 'Comments':'Blog Author' },
]

# With the list of dictionaries built, time to submit to the database
# At the same time, get the number of rows impacted
securitynik_db_connection.execute(insert(employee_table, add_multiple_employees)).rowcount


# In[16]:


'''
Trying another strategy to get users into the database
In this case, read data from a CSV file and push it into the database
First read the csv file with pandas and print the first 5 records
'''
df_employees = pd.read_csv('employees.csv', header=0, sep=',')
df_employees.head(5)


# In[17]:


'''
With the dataframe now containing the CSV data
time to take the dataframe data and push it into the SQLite database
'''
df_employees.to_sql(name='employees', con=securitynik_db_connection, if_exists='append', index=False)
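# DataFrame.to_sql with if_exists='append' is doing a bulk INSERT behind the
# scenes. The same effect with only the standard library (csv + executemany),
# sketched against an in-memory database and made-up inline CSV data:
import csv
import io
import sqlite3

csv_data = io.StringIO(
    'FName,LName,Active,Comments\n'
    'S,Alleyne,1,Blog Author\n'
    'P,Khan,0,Blog Admin\n')

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE employees (FName TEXT, LName TEXT, Active INTEGER, Comments TEXT)')

reader = csv.DictReader(csv_data)
rows = [(r['FName'], r['LName'], int(r['Active']), r['Comments']) for r in reader]
conn.executemany('INSERT INTO employees VALUES (?, ?, ?, ?)', rows)

count = conn.execute('SELECT COUNT(*) FROM employees').fetchone()[0]
print(count)  # 2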


# In[18]:


'''
With no errors above, it looks like all is well
Using the same strategy to add new blog entries
'''
df_blogs = pd.read_csv('blogs.csv', header=0, sep=',')
df_blogs.head(5)


# In[19]:


'''
With the dataframe now containing the CSV data
time to take the dataframe data and push it into the SQLite database
'''
df_blogs.to_sql(name='blogs', con=securitynik_db_connection, if_exists='append', index=False)


# In[20]:


'''
 With the data added to the various tables,
 time to now query them.
 Select all records from the employees table
'''
result_proxy = securitynik_db_connection.execute(select(employee_table)).fetchall()
result_proxy


# In[21]:


# How many records are there in the Employees table
len(result_proxy)


# In[22]:


# Get a sample result Key
result_proxy[0].keys()


# In[23]:


# With the result key, iterate through the results
print('EmployeeID | FName     |    LName  |     Active     |  Comments      ')
for result in result_proxy:
    print(f'{result.EmployeeID} |    {result.FName}   | {result.LName}  | { result.Active }  | {result.Comments}')


# In[24]:


'''
Building on the query, adding a where clause
'''
securitynik_db_connection.execute(select(employee_table).where(employee_table.columns.LName=='Alleyne')).fetchmany(size=5)


# In[25]:


'''
Building on the above query, taking advantage of 'and_'
to compound the query.
Leveraging both .columns and .c 
'''
securitynik_db_connection.execute(select(employee_table).where(and_(
                                                    employee_table.columns.LName=='Alleyne',
                                                    employee_table.c.FName=='Nik',
                                                    employee_table.c.Active==True))).fetchone()


# In[26]:


'''
Taking advantage of 'or_'
to compound the query.
Leveraging both .columns and .c 
'''
securitynik_db_connection.execute(select(employee_table).where(or_(
                                                    employee_table.columns.LName=='Alleyne',
                                                    employee_table.c.FName=='Nik',
                                                    employee_table.c.Active==False))).fetchmany(size=5)
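# and_ and or_ compile down to ordinary SQL AND/OR in the WHERE clause. The
# query above is roughly equivalent to the following, sketched with stdlib
# sqlite3, parameter placeholders, and a few made-up rows:
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE employees (FName TEXT, LName TEXT, Active INTEGER)')
conn.executemany('INSERT INTO employees VALUES (?, ?, ?)',
                 [('Nik', 'Alleyne', 1), ('P', 'Khan', 0), ('TQ', 'G', 1)])

# or_(LName == 'Alleyne', FName == 'Nik', Active == False) compiles to:
rows = conn.execute(
    'SELECT FName, LName FROM employees '
    'WHERE LName = ? OR FName = ? OR Active = ?',
    ('Alleyne', 'Nik', 0)).fetchall()
print(rows)  # [('Nik', 'Alleyne'), ('P', 'Khan')]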


# In[27]:


'''
Looking at columns in the blog table 
identify all records where the URL field is null
'''
securitynik_db_connection.execute(select(blogs_table).where(blogs_table.columns.URL==None)).fetchmany(size=5)


# In[29]:


'''
Looking for all records where the URL is not NULL in the blogs table 
'''
securitynik_db_connection.execute(select(blogs_table).where(blogs_table.columns.URL!=None)).fetchmany(size=5)


# In[32]:


'''
Finding records using Like
Looking specifically for records where the name is like kibana
Note I am ignoring case by using ilike
'''
securitynik_db_connection.execute(select(blogs_table).where(blogs_table.columns.BlogTitle.ilike('%Kibana%'))).fetchmany(size=5)


# In[46]:


'''
Revisiting the employee table 
ordering by Employee FName
Do it descending, as in going from Z to A rather than A to Z
Limit the results to 5 records
Only return the employee first and last name
'''
securitynik_db_connection.execute(select(employee_table.columns.FName, employee_table.c.LName).order_by(desc(employee_table.columns.FName)).limit(5)).fetchall()


# In[55]:


'''
Updating records where comments is empty in the blog table
'''

securitynik_db_connection.execute(update(blogs_table).where(blogs_table.c.Comments == None).values(Comments='SecurityNik is the blogger')).rowcount


# In[57]:


# Verifying the change was made on the blog table
securitynik_db_connection.execute(select(blogs_table.columns.Comments)).fetchall()


# In[60]:


# Delete the records we just created above
securitynik_db_connection.execute(delete(blogs_table).where(blogs_table.c.Comments =='SecurityNik is the blogger')).rowcount
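# The update/delete pair above maps onto plain SQL UPDATE and DELETE, with
# rowcount reporting how many rows each statement touched. A stdlib sqlite3
# sketch with made-up rows:
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE blogs (BlogID INTEGER PRIMARY KEY, Comments TEXT)')
conn.executemany('INSERT INTO blogs (Comments) VALUES (?)',
                 [(None,), (None,), ('existing',)])

updated = conn.execute(
    "UPDATE blogs SET Comments = 'SecurityNik is the blogger' "
    "WHERE Comments IS NULL").rowcount
deleted = conn.execute(
    "DELETE FROM blogs WHERE Comments = 'SecurityNik is the blogger'").rowcount
print(updated, deleted)  # 2 2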


# In[64]:


'''
Drop the other table
'''
#other_table.drop(securitynik_db_engine)


# In[ ]:


# Drop all tables
metadata.drop_all(securitynik_db_engine)


# In[28]:


'''
References:
https://campus.datacamp.com/courses/introduction-to-relational-databases-in-python
https://www.sqlalchemy.org/library.html
https://buildmedia.readthedocs.org/media/pdf/sqlalchemy/rel_1_0/sqlalchemy.pdf
https://www.tutorialspoint.com/sqlalchemy/sqlalchemy_quick_guide.htm
https://www.topcoder.com/thrive/articles/sqlalchemy-1-4-and-2-0-transitional-introduction
https://overiq.com/sqlalchemy-101/installing-sqlalchemy-and-connecting-to-database/


'''

Thursday, April 7, 2022

Installing & configuring Elasticsearch 8 and Kibana 8 on Ubuntu

In a previous post, we installed Elastic 7.1x. In this post, we are installing the new shiny toy from Elastic, Elastic 8.1

First up, install the Elastic public signing key.

securitynik@securitynik:~$ wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg

Install the apt-transport-https package

securitynik@securitynik:~$ sudo apt-get install apt-transport-https
...
Preparing to unpack .../apt-transport-https_2.0.6_all.deb ...
Unpacking apt-transport-https (2.0.6) ...
Setting up apt-transport-https (2.0.6) ..

Save the Elastic repo information

securitynik@securitynik:~$ echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main

Install Elasticsearch 8.1

securitynik@securitynik:~$ sudo apt-get update && sudo apt-get install elasticsearch

------------
The following NEW packages will be installed:
  elasticsearch
...
Preparing to unpack .../elasticsearch_8.1.1_amd64.deb ...
Creating elasticsearch group... OK
Creating elasticsearch user... OK
Unpacking elasticsearch (8.1.1) ...
Setting up elasticsearch (8.1.1) ...
--------------------------- Security autoconfiguration information ------------------------------

Authentication and authorization are enabled.
TLS for the transport and HTTP layers is enabled and configured.

The generated password for the elastic built-in superuser is : Laqr4gkhwa-Do=Ctia15

If this node should join an existing cluster, you can reconfigure this with
'/usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token <token-here>'
after creating an enrollment token on your existing cluster.

You can complete the following actions at any time:

Reset the password of the elastic built-in superuser with
'/usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic'.

Generate an enrollment token for Kibana instances with
 '/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana'.

Generate an enrollment token for Elasticsearch nodes with
'/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node'.

-------------------------------------------------------------------------------------------------
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
 sudo systemctl daemon-reload
 sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
 sudo systemctl start elasticsearch.service
-----------

Make configuration changes to customize this deployment for our environment. First, make a backup copy of the configuration file.

securitynik@securitynik:~$ sudo cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.ORIGINAL

Here are the changes to my elasticsearch.yml. 

securitynik@securitynik:~$ sudo grep --invert-match "^#" /etc/elasticsearch/elasticsearch.yml
cluster.name: n3-elastic
node.name: securitynik.n3.local
node.attr.rack: ServerCloset
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: securitynik.local
http.port: 9200

xpack.security.enabled: true

xpack.security.enrollment.enabled: true

xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12

xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
cluster.initial_master_nodes: ["securitynik"]

http.host: [_local_, _site_]

Adjust the Java Virtual Machine (JVM) heap size by creating a jvm.options file.

securitynik@securitynik:~$ sudo cat /etc/elasticsearch/jvm.options.d/jvm.options
-Xms16g
-Xmx16g

Add new host information to my hosts file, just in case DNS is not working.

securitynik@securitynik:~$ sudo bash -c "echo 10.0.0.4 peeping-tom peeping-tom.n3.local >> /etc/hosts"
securitynik@securitynik:~$ grep peeping-tom /etc/hosts
127.0.1.1 securitynik 10.0.0.4 securitynik securitynik.local

Make a copy of the CA and HTTP certs to /etc/ssl/certs, so that it is in a location easily readable by the rest of the applications.

securitynik@securitynik:~$ sudo cp /etc/elasticsearch/certs/http_ca.crt /etc/ssl/certs -v

Enable, start, and verify the Elasticsearch service.

securitynik@securitynik:~$ sudo /bin/systemctl enable elasticsearch.service
Created symlink /etc/systemd/system/multi-user.target.wants/elasticsearch.service → /lib/systemd/system/elasticsearch.service.

securitynik@securitynik:~$ sudo systemctl start elasticsearch.service
securitynik@securitynik:~$ systemctl status elasticsearch.service
● elasticsearch.service - Elasticsearch
     Loaded: loaded (/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2022-03-30 19:53:03 EDT; 18s ago
       Docs: https://www.elastic.co
   Main PID: 76746 (java)
      Tasks: 80 (limit: 38298)
     Memory: 16.9G
     CGroup: /system.slice/elasticsearch.service
             ├─76746 /usr/share/elasticsearch/jdk/bin/java -Xshare:auto -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cac>
             └─77051 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller

Mar 30 19:52:47 securitynik systemd[1]: Starting Elasticsearch...
Mar 30 19:53:03 securitynik systemd[1]: Started Elasticsearch.
lines 1-13/13 (END)

Confirming the Elasticsearch ports are listening for incoming communication.

securitynik@securitynik:~$ sudo ss --numeric --listening --tcp --processes | grep --perl-regexp "9300|9200"
LISTEN   0        4096       [::ffff:10.0.0.4]:9200                 *:*       users:(("java",pid=76746,fd=386))

LISTEN   0        4096         [::ffff:127.0.0.1]:9200                 *:*       users:(("java",pid=76746,fd=385))

LISTEN   0        4096                      [::1]:9200              [::]:*       users:(("java",pid=76746,fd=384))

LISTEN   0        4096       [::ffff:10.0.0.4]:9300                 *:*       users:(("java",pid=76746,fd=382))

Connecting to the Elasticsearch service via HTTPS

securitynik@securitynik:~$ sudo curl https://10.0.0.4:9200 --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic
Enter host password for user 'elastic':
{
  "name" : "securitynik.n3.local",
  "cluster_name" : "n3-elastic",
  "cluster_uuid" : "KDh-JRfXQtuXjXo2hniQjg",
  "version" : {
    "number" : "8.1.1",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "d0925dd6f22e07b935750420a3155db6e5c58381",
    "build_date" : "2022-03-17T22:01:32.658689558Z",
    "build_snapshot" : false,
    "lucene_version" : "9.0.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}

Good stuff! We have validated Elasticsearch is working as expected.
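The curl call above is just a TLS GET with HTTP basic auth, so the same request can be framed from Python with only the standard library. The sketch below only builds the request object so the Authorization header can be inspected; it makes no network call, and the host and password are placeholders:

```python
import base64
import urllib.request

user, password = 'elastic', 'changeme'   # placeholder credentials
url = 'https://10.0.0.4:9200'

# Basic auth is base64("user:password") in the Authorization header
token = base64.b64encode(f'{user}:{password}'.encode()).decode()
request = urllib.request.Request(url, headers={'Authorization': f'Basic {token}'})

print(request.get_header('Authorization'))  # Basic ZWxhc3RpYzpjaGFuZ2VtZQ==

# To actually send it, build an SSLContext trusting
# /etc/elasticsearch/certs/http_ca.crt and pass it to urllib.request.urlopen.
```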

Installing Kibana

Considering all the heavy lifting was done above, time to install Kibana.

securitynik@securitynik:~$ sudo apt-get update && sudo apt-get install kibana
...
Preparing to unpack .../kibana_8.1.1_amd64.deb ...
Unpacking kibana (8.1.1) ...
Setting up kibana (8.1.1) ...
Creating kibana group... OK
Creating kibana user... OK
Created Kibana keystore in /etc/kibana/kibana.keystore

Make a copy of the Kibana configuration file.

securitynik@securitynik:~$ sudo cp /etc/kibana/kibana.yml /etc/kibana.yml.ORIGINAL

Generate a Kibana enrollment token

securitynik@securitynik:~$ sudo /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token --scope kibana
eyM4XXIiOiI4LjEuMSIsImFkciI6WyIxOTIuMTY4LjAuNDo5MjAwIl0sImZnciI6ImUyMjRhMTkyMzkwMzE1MzM2MjM5MjFmMDMyYjZhOTVlMDcwZDY3Mzk2NGE0M2ZmOWQ5OWU5OTc3ZmI4NTI2YmYiLCJrZXkiOiI0c25pM1g4QmtmdzFwTU9VUDEyqapaOTg9DTJtSFNtLTFMSjVzX3g0ckZ3In0=
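The enrollment token is base64-encoded JSON carrying the Elastic version, the node addresses, the CA certificate fingerprint, and an API key. Its structure can be inspected with the standard library; the token built below is a fabricated example, not the real one from above:

```python
import base64
import json

# Fabricated payload for illustration only
payload = {'ver': '8.1.1', 'adr': ['10.0.0.4:9200'], 'fgr': 'aa' * 32, 'key': 'example-key'}
token = base64.b64encode(json.dumps(payload).encode()).decode()

# Decode it the same way Kibana does during enrollment
decoded = json.loads(base64.b64decode(token))
print(decoded['ver'], decoded['adr'])  # 8.1.1 ['10.0.0.4:9200']
```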

Generate encryption keys for SavedObjects, Reports, Dashboards, etc.

securitynik@securitynik:~$ sudo /usr/share/kibana/bin/kibana-encryption-keys generate
## Kibana Encryption Key Generation Utility

The 'generate' command guides you through the process of setting encryption keys for:

xpack.encryptedSavedObjects.encryptionKey
    Used to encrypt stored objects such as dashboards and visualizations
    https://www.elastic.co/guide/en/kibana/current/xpack-security-secure-saved-objects.html#xpack-security-secure-saved-objects

xpack.reporting.encryptionKey
    Used to encrypt saved reports
    https://www.elastic.co/guide/en/kibana/current/reporting-settings-kb.html#general-reporting-settings

xpack.security.encryptionKey
    Used to encrypt session information
    https://www.elastic.co/guide/en/kibana/current/security-settings-kb.html#security-session-and-cookie-settings


Already defined settings are ignored and can be regenerated using the --force flag.  Check the documentation links for instructions on how to rotate encryption keys.
Definitions should be set in the kibana.yml used configure Kibana.

Settings:
xpack.encryptedSavedObjects.encryptionKey: f4667a5634faf22053dbd40d91afa8b5
xpack.reporting.encryptionKey: f03f17de223aced044cd3afb42de3137
xpack.security.encryptionKey: f17be84bbaa17dc9cb8a06cb95e0d5be

Add the last 3 lines from above to the kibana.yml file, then enable and start the Kibana service.

securitynik@securitynik:~$ sudo /bin/systemctl daemon-reload
securitynik@securitynik:~$ sudo /bin/systemctl enable kibana.service
securitynik@securitynik:~$ sudo systemctl start kibana.service

-------------
securitynik@securitynik:~$ sudo systemctl status kibana.service
● kibana.service - Kibana
     Loaded: loaded (/lib/systemd/system/kibana.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2022-03-30 22:56:14 EDT; 8s ago
       Docs: https://www.elastic.co
   Main PID: 102001 (node)
      Tasks: 11 (limit: 38298)
     Memory: 231.7M
     CGroup: /system.slice/kibana.service
             └─102001 /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/bin/../src/cli/dist

Mar 30 22:56:14 securitynik systemd[1]: Started Kibana.
Mar 30 22:56:21 securitynik kibana[102001]: [2022-03-30T22:56:21.275-04:00][INFO ][plugins-service] Plugin "metricsEntities" is disabled.
Mar 30 22:56:21 securitynik kibana[102001]: [2022-03-30T22:56:21.345-04:00][INFO ][http.server.Preboot] http server running at http://192>
Mar 30 22:56:21 securitynik kibana[102001]: [2022-03-30T22:56:21.372-04:00][INFO ][plugins-system.preboot] Setting up [1] plugins: [inter>
Mar 30 22:56:21 securitynik kibana[102001]: [2022-03-30T22:56:21.374-04:00][INFO ][preboot] "interactiveSetup" plugin is holding setup: V>
Mar 30 22:56:21 securitynik kibana[102001]: [2022-03-30T22:56:21.399-04:00][INFO ][root] Holding setup until preboot stage is completed.
Mar 30 22:56:21 securitynik kibana[102001]: i Kibana has not been configured.
Mar 30 22:56:21 securitynik kibana[102001]: Go to http://10.0.0.4:5601/?code=452840 to get started.

Open the URL identified above in a browser and paste in the previously created Kibana enrollment token (screenshots of the enrollment and login screens are omitted here). Once the token is accepted and setup completes, log in to the UI with the elastic superuser created during installation.

After all the changes, here is what my kibana.yml looks like

securitynik@securitynik:~$ sudo grep --perl-regexp --invert-match "^#" /etc/kibana/kibana.yml

server.host: "10.0.0.4"

server.publicBaseUrl: "http://10.0.0.4:5601"

server.name: "kibana.n3.local"

logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: json
  root:
    appenders:
      - default
      - file

pid.file: /run/kibana/kibana.pid

xpack.encryptedSavedObjects.encryptionKey: d2667a5634faf33053dbd40d91afa8c9
xpack.reporting.encryptionKey: f03f17de223aced044cd3afb42de4398
xpack.security.encryptionKey: f17be84bbaa17dc9cb8a06cb95e0f437

elasticsearch.hosts: ['https://10.0.0.4:9200']
elasticsearch.serviceAccountToken: BBEAAWVsYXN0aWMva2liYW5hL3Vucm9sbC1wcm9jZXNzLM2va2VuLTE2NDg2OTU1ODcwMDg6OVlHSWhfaFlRQXVzMFhVcWZqSTdNZw
elasticsearch.ssl.certificateAuthorities: [/var/lib/kibana/ca_1648695587748.crt]
xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://10.0.0.4:9200'], ca_trusted_fingerprint: e224a19239031533623921f032b6a06e070d673964a43ff9d99e9977fb8526bd}]