LockerApp

Basic architecture

Installation

Conda environment

Create an environment and activate it

conda create --name p36_lockerapp python=3.6
conda activate p36_lockerapp

Flask and Flask-RESTful

pip install flask
pip install flask-restful
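
For reference, a minimal Flask-RESTful resource looks like the sketch below; the Locker resource, its route and the port are illustrative assumptions, not this repository's actual endpoints.

# Minimal Flask-RESTful sketch; the Locker resource and its route are
# illustrative assumptions, not this repository's actual endpoints.
from flask import Flask
from flask_restful import Api, Resource

app = Flask(__name__)
api = Api(app)

class Locker(Resource):
    def get(self, locker_id):
        # A real service would look the locker up in the database here.
        return {"locker_id": locker_id, "status": "free"}

api.add_resource(Locker, "/lockers/<int:locker_id>")

if __name__ == "__main__":
    app.run(port=5000)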

RabbitMQ

conda install -c conda-forge rabbitmq-server
conda install -c conda-forge pika
conda install -c conda-forge colorlog

PostgreSQL

conda install -c anaconda psycopg2

HAProxy

conda install -c bkreider haproxy

Database installation, configuration and creation

PostgreSQL

Install PostgreSQL on Ubuntu
sudo apt update
sudo apt-get install postgresql postgresql-contrib
# optionally:
sudo apt-get install libpq-dev postgresql-client postgresql-client-common
If you are not using conda

Install the Python lib (this succeeds only if PostgreSQL is already installed):

pip install psycopg2
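
A quick connectivity check with psycopg2 (the users_db name and postgres user come from the SQL scripts below; the password is an assumption):

# Connectivity check; the database name and user come from the SQL scripts,
# the password is an assumption -- substitute your own.
import psycopg2

conn = psycopg2.connect(
    host="localhost",
    dbname="users_db",
    user="postgres",
    password="postgres",  # assumption: replace with your real password
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
conn.close()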
Start the PostgreSQL server

(required before running the app):

sudo service postgresql start
Create Database
psql -f create_users_db.sql -U postgres \
&& psql -f create_users_table.sql -U postgres -d users_db
Drop Database
psql -f drop_users_db.sql -U postgres

MongoDB

Install MongoDB on Ubuntu if you are not using conda
sudo apt update
sudo apt install -y mongodb
Install the Python lib
python -m pip install pymongo
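
A quick check that pymongo can reach a local mongod on the default port (a sketch, not part of the repo):

# Verify that pymongo can reach a local mongod instance.
from pymongo import MongoClient

client = MongoClient("localhost", 27017, serverSelectionTimeoutMS=2000)
print(client.server_info()["version"])  # raises if the server is unreachable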
Create database folders
cd mongo_db
mkdir db0 db1 db2
Create Database
python locker_service/mongo_db/migrations/create_lockers_db.py
Drop Database
python locker_service/mongo_db/migrations/drop_mongo_db.py

RabbitMQ Setup

To configure hostnames, edit the hosts file:

sudo vim /etc/hosts

Add to the file:

xxx.xxx.xxx.xxx rabbit01
xxx.xxx.xxx.xxx rabbit02

where xxx.xxx.xxx.xxx are the IP addresses of the machines where the nodes will be launched, and rabbit01 and rabbit02 are the node names.

Create a rabbitmq-env.conf file containing the following line:

rabbit01$ vim ~/anaconda3/envs/p36_lockerapp/etc/rabbitmq-env.conf

NODENAME=rabbit@rabbit01

rabbit02$ vim ~/anaconda3/envs/p36_lockerapp/etc/rabbitmq-env.conf

NODENAME=rabbit@rabbit02

Here rabbit01$ denotes a prompt on one computer and rabbit02$ a prompt on the other.

Start independent nodes: run rabbitmq-server on each computer:

rabbit01$ rabbitmq-server
rabbit02$ rabbitmq-server

This creates two independent RabbitMQ brokers, one on each node, as confirmed by the cluster_status command:

rabbit01$ rabbitmqctl cluster_status

Cluster status of node rabbit@rabbit01 ... [{nodes,[{disc,[rabbit@rabbit01]}]},{running_nodes,[rabbit@rabbit01]}] ...done.

rabbit02$ rabbitmqctl cluster_status

Cluster status of node rabbit@rabbit02 ... [{nodes,[{disc,[rabbit@rabbit02]}]},{running_nodes,[rabbit@rabbit02]}] ...done.

Create the cluster

rabbit02$ rabbitmqctl stop_app

Stopping node rabbit@rabbit02 ...done.

rabbit02$ rabbitmqctl join_cluster rabbit@rabbit01

Clustering node rabbit@rabbit02 with [rabbit@rabbit01] ...done.

rabbit02$ rabbitmqctl start_app

Starting node rabbit@rabbit02 ...done.

We can see that the two nodes are joined in a cluster by running the cluster_status command on either of the nodes:

rabbit01$ rabbitmqctl cluster_status

Cluster status of node rabbit@rabbit01 ... [{nodes,[{disc,[rabbit@rabbit01,rabbit@rabbit02]}]}, {running_nodes,[rabbit@rabbit02,rabbit@rabbit01]}] ...done.

Enable rabbitmq_management on every node:

rabbitmq-plugins enable rabbitmq_management

Create an admin user on one node; this user can then be used on any node in the cluster:

rabbitmqctl add_user <user> <password>
rabbitmqctl set_user_tags <user> administrator
rabbitmqctl set_permissions -p / <user> ".*" ".*" ".*"

(or set permissions through the management web UI at xxx.xxx.xxx.xxx:15672)
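
A minimal pika publish sketch using the user created above; the rabbit01 host comes from /etc/hosts, while the queue name and message body are illustrative assumptions:

# Minimal pika publisher; the queue name and message body are assumptions.
import pika

credentials = pika.PlainCredentials("user", "password")  # your <user>/<password>
params = pika.ConnectionParameters(host="rabbit01", credentials=credentials)

connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue="lockers")  # assumed queue name
channel.basic_publish(exchange="", routing_key="lockers", body="open locker 42")
connection.close()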

Hazelcast

Download the Hazelcast zip from https://hazelcast.org/download/

Unzip to LockerApp/locker_app

Launching

Update config/config.json with:

  • IP and host information of the services (a sketch of reading this file follows below)
  • credentials for the RabbitMQ user
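
The exact schema of config/config.json is repo-specific; purely as a sketch, a service might read it like this (every key name below is an assumption):

# Sketch of reading config/config.json; every key name below is an
# assumption -- check the actual file in the repository for the real schema.
import json

with open("config/config.json") as f:
    config = json.load(f)

locker_app_host = config["locker_app"]["host"]      # assumed key
locker_app_port = config["locker_app"]["port"]      # assumed key
rabbitmq_user = config["rabbitmq"]["user"]          # assumed key
rabbitmq_password = config["rabbitmq"]["password"]  # assumed key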

Launch HAProxy

(p36_lockerapp)$ haproxy -f config/haproxy.cfg

Launch LockerApp on two computers (make sure each instance has a different LockerApp IP in config/config.json; the same IPs must also appear in config/haproxy.cfg):

(p36_lockerapp)$ python locker_app/app.py

Launch LockerService:

(p36_lockerapp)$ python locker_service/locker_service.py

Launch UserService:

(p36_lockerapp)$ python user_service/user_service.py

Launch the RabbitMQ cluster (two nodes on different computers whose IP addresses are specified in /etc/hosts):

rabbit01$ rabbitmq-server

rabbit02$ rabbitmq-server

Launch Hazelcast on two computers:

./locker_app/hazelcast_locker_app/bin/stop.sh; ./locker_app/hazelcast_locker_app/bin/start.sh
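
If you want to talk to the Hazelcast cluster from Python, one option is the hazelcast-python-client package (pip install hazelcast-python-client); this package, the cluster address and the map name are assumptions, and the repo may interact with Hazelcast differently:

# Sketch using hazelcast-python-client; the cluster address and the
# map name are assumptions.
import hazelcast

client = hazelcast.HazelcastClient(cluster_members=["127.0.0.1:5701"])
lockers = client.get_map("lockers").blocking()  # assumed map name
lockers.put("locker_42", "occupied")
print(lockers.get("locker_42"))
client.shutdown()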

Launch RabbitMQ receiver for LockerService:

(p36_lockerapp)$ python locker_service/rabbitmq_receive_from_user_service.py

Launch MongoDB with replication: run three separate mongod servers

(p36_lockerapp)$ mongod --port 27017 --dbpath ./db0 --replSet lockers_rs
(p36_lockerapp)$ mongod --port 27018 --dbpath ./db1 --replSet lockers_rs
(p36_lockerapp)$ mongod --port 27019 --dbpath ./db2 --replSet lockers_rs

Initialize the replica set

(p36_lockerapp)$ python locker_service/mongo_db/migrations/create_lockers_db.py
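
The script's contents aren't reproduced here; with pymongo, replica-set initiation would presumably look something like the sketch below (hosts and ports match the mongod commands above; everything else is an assumption):

# Sketch of replica-set initiation with pymongo; the actual
# create_lockers_db.py may differ. Hosts/ports match the mongod commands above.
from pymongo import MongoClient

# directConnection requires PyMongo >= 3.12; older versions connect
# directly to a single host by default, so the flag can be omitted there.
client = MongoClient("localhost", 27017, directConnection=True)
rs_config = {
    "_id": "lockers_rs",
    "members": [
        {"_id": 0, "host": "localhost:27017"},
        {"_id": 1, "host": "localhost:27018"},
        {"_id": 2, "host": "localhost:27019"},
    ],
}
client.admin.command("replSetInitiate", rs_config)

Once the set is initiated, application clients can connect with MongoClient("localhost", 27017, replicaSet="lockers_rs").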

If you have trouble creating the replica set, run

(p36_lockerapp)$ mongod --port 27017 --dbpath ./db0 --replSet lockers_rs

In another terminal, run mongo to open the MongoDB shell (while only this one mongod node is active). Then run rs.initiate(), launch the other nodes

(p36_lockerapp)$ mongod --port 27018 --dbpath ./db1 --replSet lockers_rs
(p36_lockerapp)$ mongod --port 27019 --dbpath ./db2 --replSet lockers_rs

and add these two members to the replica set by executing, still in the mongo shell:

> rs.add("localhost:27018")
> rs.add("localhost:27019")

Then run the migration script again:

(p36_lockerapp)$ python locker_service/mongo_db/migrations/create_lockers_db.py

Launching with Guake terminal

From LockerApp folder:

# to run LockerApp, LockerService, three mongod instances, Hazelcast and RabbitMQ
./scripts/locker_app_run.sh

# to run LockerApp (second instance), UserService, Hazelcast and RabbitMQ
./scripts/user_service_run.sh