Build an Apache Metron single node without Docker or Ansible – Part #2

In Part #1 we built Metron 0.7.2 without Docker, using Maven 3.6.3. In this post, we’ll install everything needed to set up Apache Metron on a single node to test the SOC system.

Required software:

Ambari Server 2.5.x

MySQL 5.7 (see the separate guide on installing MySQL 5.7 on CentOS 7)

STEP 1: First, install the prerequisites for Ambari:

yum install git wget curl rpm tar unzip openssh-clients bzip2 createrepo yum-utils ntp python-pip python-psutil libffi-devel gcc openssl-devel -y
pip install --upgrade pip
pip install requests

STEP 2: After that, create a localrepo directory and copy the Metron RPMs built in Part #1 there:

mkdir /localrepo
cp -rp /root/metron/metron-deployment/packaging/docker/rpm-docker/RPMS/noarch/* /localrepo/
createrepo /localrepo
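If Ambari doesn’t pick up the local repo automatically, you can point yum at it yourself. A minimal repo definition (the repo id and name below are my own choice) — generate it locally, then copy it into /etc/yum.repos.d/:

```shell
# Minimal yum repo definition pointing at the local repo built above.
# The repo id "metron-local" is arbitrary; baseurl must match /localrepo.
cat > metron-local.repo << 'EOF'
[metron-local]
name=Metron Local Repository
baseurl=file:///localrepo
enabled=1
gpgcheck=0
EOF
# cp metron-local.repo /etc/yum.repos.d/ && yum repolist
```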

STEP 3: Fetch and set up a logrotate script for the Hadoop services:

wget -O /etc/logrotate.d/metron-ambari
sed -i 's/^  {{ hadoop_logrotate_frequency }}.*$/  daily/' /etc/logrotate.d/metron-ambari
sed -i 's/^  rotate {{ hadoop_logrotate_retention }}.*$/  rotate 30/' /etc/logrotate.d/metron-ambari
chmod 0644 /etc/logrotate.d/metron-ambari
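The two sed lines above replace Jinja-style placeholders in the downloaded template with concrete values. Here is a self-contained demo of that substitution (the sample template content is illustrative, not the real file):

```shell
# Illustrative stand-in for the downloaded logrotate template.
cat > metron-ambari.sample << 'EOF'
/var/log/hadoop/*/*.log {
  {{ hadoop_logrotate_frequency }}
  rotate {{ hadoop_logrotate_retention }}
}
EOF
# Same substitutions as above, applied to the sample file.
sed -i 's/^  {{ hadoop_logrotate_frequency }}.*$/  daily/' metron-ambari.sample
sed -i 's/^  rotate {{ hadoop_logrotate_retention }}.*$/  rotate 30/' metron-ambari.sample
cat metron-ambari.sample
```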

Enable time sync, and disable the firewall and SELinux on every node (I know, but for the sake of simplicity, quickness and testing, I’ve disabled SELinux):

systemctl enable ntpd
systemctl start ntpd
iptables -P INPUT ACCEPT
iptables -t nat -F
iptables -t mangle -F
iptables -F
iptables -X
iptables-save > /etc/sysconfig/iptables
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
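Note that `setenforce 0` only lasts until the next reboot. To persist it, also set `SELINUX=disabled` in /etc/selinux/config. The sketch below runs against a scratch copy so it is safe to dry-run — apply the same sed to the real file on your node:

```shell
# Scratch copy standing in for /etc/selinux/config (content illustrative).
cfg=selinux-config.sample
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
# Flip enforcement off permanently; run this against /etc/selinux/config for real.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"   # SELINUX=disabled
```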

Also, if you are using CentOS 7 with Python 2.7.5 or above, you will encounter this error during the Ambari agent install in the Ambari UI:

[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)

To fix it, disable the certificate check in Python like this:

sed -i 's/verify=platform_default/verify=disable/' /etc/python/cert-verification.cfg

STEP 4: Download and set up the Ambari repo (you may replace the “” with a newer Ambari version number):

wget -nv -O /etc/yum.repos.d/ambari.repo

Install and setup Ambari server:

yum install ambari-server -y
ambari-server setup -s

STEP 5: Add the Metron service to Ambari by running the mpack command (make sure to specify the correct path to the mpack in --mpack=):

ambari-server install-mpack --mpack=/root/metron/metron-deployment/packaging/ambari/metron-mpack/target/metron_mpack- --verbose

Start Ambari:

ambari-server start
  • Access the Ambari UI by going to the following URL in a web browser: http://<replace_with_master_node_ip>:8080/. You can use admin/admin as username/password. Start the Install Wizard.

Get Started page: Enter any desired cluster name.

Select Version: Make sure “Public Repository” is checked. You should also see the /localrepo directory listed.

Install Options: In “Target Hosts”, specify the hostnames of the nodes where the Ambari cluster should be installed (all the ones you specified in /etc/hosts). Copy the content of the main node’s private key (/root/.ssh/id_rsa) into “Host Registration Information”. If you receive the warning “The following hostnames are not valid FQDNs”, ignore it and click OK.

Choose Services: Select following Services:

YARN + MapReduce2
Ambari Metrics
Zeppelin Notebook

Assign Masters: Assign “Kafka Broker” on all nodes. Make sure to move the following components onto one common node (taken from the previous guide; it may no longer be strictly necessary):

Storm UI Server
Metron Indexing
MySQL Server
Kibana Server
Elasticsearch Master
Metron Parsers
Metron Enrichment

Assign Slaves and Clients: Select “All” for each listed component.

Customize Services: Following is a list of services that need to be configured:

  • Set the “NameNode Java heap size” (namenode_heapsize) from the default 1024 MB to at least 4096 MB under HDFS -> Configs.
  • For ElasticSearch:
    • Set “zen_discovery_ping_unicast_hosts” to the IP of the node where you assigned ElasticSearch Master on the Assign Master tab.
    • Under “Advanced elastic-site”: Change “network_host” to “”. Do not do this if your Metron node is exposed to the public internet! (The default is now “[ _local_, _site_ ]”.)
  • Kibana:
    • Set “kibana_es_url” to http://<replace_with_elasticsearch_master_hostname>:9200. “replace_with_elasticsearch_master_hostname” is the IP of the node where you assigned ElasticSearch Master on the Assign Master tab.
    • Change “kibana_default_application” to “dashboard/Metron-Dashboard”
  • Metron: Set “Elasticsearch Hosts” to the IP of the node where you assigned ElasticSearch Master on the Assign Master tab.
  • Storm: You might have to increase the number of “supervisor.slots.ports” from the default “[6700, 6701]” to “[6700, 6701, 6702, 6703, 6704]” if you’re only installing a single node.

For metron REST config database:

Metron JDBC client path: /usr/share/java/mysql-connector-java.jar
Metron JDBC Driver: com.mysql.jdbc.Driver
Metron JDBC password: <DB PASSWORD>
Metron JDBC platform: mysql
Metron JDBC URL: jdbc:mysql://<DB HOST>:3306/<DB NAME>
Metron JDBC username: <DB USERNAME>

Set the rest of the configuration values to those recommended by Ambari, or the ones you desire (like DB passwords), and perform the install.

Install everything. Metron REST will probably not work yet, as we still need to add a user and the database to MySQL.
At this point, make sure that all the services are up. You might have to manually start a few.

Configure a database and user for Metron REST in MySQL. On the node where you installed the Metron REST UI, do (the placeholders match the Metron JDBC settings above):

# mysql -u root -p
> CREATE DATABASE IF NOT EXISTS <DB NAME>;
> CREATE USER '<DB USERNAME>'@'localhost' IDENTIFIED BY '<DB PASSWORD>';
> GRANT ALL PRIVILEGES ON <DB NAME>.* TO '<DB USERNAME>'@'localhost';
> quit

There’s one last step before the Metron REST service will run. Due to systemd on CentOS 7, `service metron-rest start <PASSWORD>` no longer works. Instead, edit the configuration file `/etc/rc.d/init.d/metron-rest`: change `METRON_JDBC_PASSWORD="$2"` to `METRON_JDBC_PASSWORD="<DB PASSWORD>"`, then restart the metron-rest service via the Ambari interface. Make sure the Metron REST UI is started before moving to the next item.

Add the Metron REST username and password to the metronrest database:

# mysql -u <DB USERNAME> -p
> use <DB NAME>;
> insert into users (username, password, enabled) values ('<USERNAME>','<PASSWORD>',1);
> insert into authorities (username, authority) values ('<USERNAME>', 'ROLE_USER');
> quit

Make sure that all the services are up.

Install metron_pcapservice:

# cp /root/metron/metron-platform/metron-api/target/metron-api-0.4.1.jar /usr/metron/0.4.1/lib/
# wget -O /etc/init.d/pcapservice
# sed -i 's/{{ pcapservice_jar_dst }}/\/usr\/metron\/0.4.1\/lib\/metron-api-0.4.1.jar/' /etc/init.d/pcapservice
# sed -i 's/{{ pcapservice_port }}/8081/' /etc/init.d/pcapservice
# sed -i 's/{{ query_hdfs_path }}/\/tmp/' /etc/init.d/pcapservice
# sed -i 's/{{ pcap_hdfs_path }}/\/apps\/metron\/pcap/' /etc/init.d/pcapservice
# chmod 755 /etc/init.d/pcapservice
# wget -O /etc/logrotate.d/metron-pcapservice
# sed -i 's/^  {{ metron_pcapservice_logrotate_frequency }}.*$/  daily/' /etc/logrotate.d/metron-pcapservice
# sed -i 's/^  rotate {{ metron_pcapservice_logrotate_retention }}.*$/  rotate 30/' /etc/logrotate.d/metron-pcapservice
# chmod 644 /etc/logrotate.d/metron-pcapservice

Install tap interface:

# ip tuntap add tap0 mode tap

Bring up tap0 and put it into promiscuous mode:

# ifconfig tap0 up
# ip link set tap0 promisc on

Install librdkafka:

# yum install cmake make gcc gcc-c++ flex bison libpcap libpcap-devel openssl-devel python-devel swig zlib-devel perl cyrus-sasl cyrus-sasl-devel cyrus-sasl-gssapi -y
# cd /tmp
# wget -O /tmp/librdkafka-0.9.4.tar.gz
# /bin/gtar --extract -C /tmp -z -f /tmp/librdkafka-0.9.4.tar.gz
# cd /tmp/librdkafka-0.9.4
# ./configure --prefix=/usr/local --enable-sasl
# make
# make install
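librdkafka lands in /usr/local/lib, which is not always on the runtime linker search path (note the LD_LIBRARY_PATH override for pycapa later on). A common way to register it system-wide — the conf file name below is my own choice:

```shell
# Add /usr/local/lib to the dynamic linker search path and refresh the cache.
# "usrlocal.conf" is an arbitrary file name under /etc/ld.so.conf.d/.
echo "/usr/local/lib" > /etc/ld.so.conf.d/usrlocal.conf
ldconfig
```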

Install pycapa (I don’t think we need the virtualenv anymore on CentOS 7; this needs some further investigation):

# yum install @Development python-virtualenv libpcap-devel libselinux-python -y
# mkdir /usr/local/pycapa
# cd /usr/local/pycapa
# virtualenv pycapa-venv
# cp -r /root/metron/metron-sensors/pycapa/. /usr/local/pycapa/.
(# /usr/local/pycapa/pycapa-venv/bin/pip install -r requirements.txt)
# cd /usr/local/pycapa
# source pycapa-venv/bin/activate
# pip install -r requirements.txt
# pip install --upgrade pip
# /usr/local/pycapa/pycapa-venv/bin/python setup.py install
# wget -O /etc/init.d/pycapa
# sed -i 's/{{ pycapa_log }}/\/var\/log\/pycapa.log/' /etc/init.d/pycapa
# sed -i 's/{{ pycapa_home }}/\/usr\/local\/pycapa/' /etc/init.d/pycapa
# sed -i 's/{{ python27_home }}/\/opt\/rh\/python27\/root/' /etc/init.d/pycapa
# sed -i 's/{{ pycapa_bin }}/\/usr\/local\/pycapa\/pycapa-venv\/bin/' /etc/init.d/pycapa
# sed -i 's/--kafka {{ kafka_broker_url }}/--kafka-broker <IP:6667>/' /etc/init.d/pycapa
# sed -i 's/--topic {{ pycapa_topic }}/--kafka-topic pcap/' /etc/init.d/pycapa
# sed -i 's/{{ pycapa_sniff_interface }}/tap0/' /etc/init.d/pycapa
# sed -i 's/export LD_LIBRARY_PATH=\/opt\/rh\/python27\/root\/usr\/lib64/export LD_LIBRARY_PATH=\/usr\/local\/lib/' /etc/init.d/pycapa
# chmod 755 /etc/init.d/pycapa
# yum install @Development libdnet-devel rpm-build libpcap libpcap-devel pcre pcre-devel zlib zlib-devel glib2-devel -y
# yum install kafka -y

Install bro:

# wget -O /tmp/bro-2.4.1.tar.gz
# /bin/gtar --extract -C /tmp -z -f /tmp/bro-2.4.1.tar.gz
# cd /tmp/bro-2.4.1
# ./configure --prefix=/usr/local/bro
# make -j4
# make install

Configure bro:

# sed -i 's/interface=eth0/interface=tap0/' /usr/local/bro/etc/node.cfg
# /usr/local/bro/bin/broctl install

Edit crontab with # crontab -e and add:

0-59/5    *    *    *    *    /usr/local/bro/bin/broctl cron
0-59/5    *    *    *    *    rm -rf /usr/local/bro/spool/tmp/*


# cp -r /root/metron/metron-sensors/bro-plugin-kafka /tmp
# cd /tmp/bro-plugin-kafka
# rm -rf build/
# ./configure --bro-dist=/tmp/bro-2.4.1 --install-root=/usr/local/bro/lib/bro/plugins/ --with-librdkafka=/usr/local
# make -j4
# make install

Configure bro-kafka plugin:

# cat << EOF >> /usr/local/bro/share/bro/site/local.bro
@load Bro/Kafka/logs-to-kafka.bro
redef Kafka::logs_to_send = set(HTTP::LOG, DNS::LOG);
redef Kafka::topic_name = "bro";
redef Kafka::tag_json = T;
redef Kafka::kafka_conf = table( [""] = "<KAFKA_BROKER_IP>:6667" );
EOF
# /usr/local/bro/bin/broctl deploy
# ip link set tap0 promisc on

Install daq:

# wget -O /tmp/daq-2.0.6-1.src.rpm
# cd /tmp
# rpmbuild --rebuild daq-2.0.6-1.src.rpm

This last command creates the files /root/rpmbuild/RPMS/x86_64/daq-2.0.6-1.x86_64.rpm & /root/rpmbuild/RPMS/x86_64/daq-debuginfo-2.0.6-1.x86_64.rpm. We only need to install the first rpm.

# yum install /root/rpmbuild/RPMS/x86_64/daq-2.0.6-1.x86_64.rpm -y

Install snort:

# wget -O /tmp/snort-
# cd /tmp
# rpmbuild --rebuild snort-

This last command creates the files /root/rpmbuild/RPMS/x86_64/snort- and /root/rpmbuild/RPMS/x86_64/snort-debuginfo-. We only need to install the first rpm.

# yum install /root/rpmbuild/RPMS/x86_64/snort- -y
# wget -O /tmp/community-rules.tar.gz
# /bin/gtar --extract -C /tmp -z -f /tmp/community-rules.tar.gz
# cp -r community-rules/community.rules /etc/snort/rules
# touch /etc/snort/rules/white_list.rules
# touch /etc/snort/rules/black_list.rules
# touch /var/log/snort/alerts
# chown -R snort:snort /etc/snort
# sed -i 's/^# alert/alert/' /etc/snort/rules/community.rules
# wget -O /tmp/snort.conf
# cp /tmp/snort.conf /etc/snort/snort.conf
# sed -i 's/^ipvar HOME_NET.*$/ipvar HOME_NET any/' /etc/snort/snort.conf
# echo "output alert_csv: /var/log/snort/alert.csv default" >> /etc/snort/snort.conf
# sed -i 's/^ALERTMODE=.*$/ALERTMODE=/' /etc/sysconfig/snort
# sed -i 's/^NO_PACKET_LOG=.*$/NO_PACKET_LOG=1/' /etc/sysconfig/snort
# sed -i 's/^INTERFACE=.*$/INTERFACE=tap0/' /etc/sysconfig/snort
# mkdir /opt/snort-producer
# chmod 755 /opt/snort-producer
# wget -O /opt/snort-producer/
# sed -i 's/{{ snort_alert_csv_path }}/\/var\/log\/snort\/alert.csv/' /opt/snort-producer/
# sed -i 's/{{ kafka_prod }}/\/usr\/hdp\/current\/kafka-broker\/bin\//' /opt/snort-producer/
# sed -i 's/{{ kafka_broker_url }}/<KAFKA_BROKER_IP>:6667/' /opt/snort-producer/
# sed -i 's/{{ snort_topic }}/snort/' /opt/snort-producer/
# chmod 755 /opt/snort-producer/
# wget -O /etc/init.d/snort-producer
# sed -i 's/{{ snort_producer_home }}/\/opt\/snort-producer/' /etc/init.d/snort-producer
# sed -i 's/{{ snort_producer_start }}/\/opt\/snort-producer\//' /etc/init.d/snort-producer
# chmod 755 /etc/init.d/snort-producer

Install yaf:

# wget -O /tmp/libfixbuf-1.7.1.tar.gz
# /bin/gtar --extract -C /tmp -z -f /tmp/libfixbuf-1.7.1.tar.gz
# cd /tmp/libfixbuf-1.7.1
# ./configure
# make -j4
# make install
# wget -O /tmp/yaf-2.8.0.tar.gz
# /bin/gtar --extract -C /tmp -z -f /tmp/yaf-2.8.0.tar.gz
# cd /tmp/yaf-2.8.0
# ./configure --enable-applabel --enable-plugins
# make -j4
# make install
# mkdir /opt/yaf
# chmod 755 /opt/yaf
# wget -O /opt/yaf/
# sed -i 's/{{ yaf_bin }}/\/usr\/local\/bin\/yaf/' /opt/yaf/
# sed -i 's/{{ sniff_interface }}/tap0/' /opt/yaf/
# sed -i 's/{{ yafscii_bin }}/\/usr\/local\/bin\/yafscii/' /opt/yaf/
# sed -i 's/{{ kafka_prod }}/\/usr\/hdp\/current\/kafka-broker\/bin\//' /opt/yaf/
# sed -i 's/{{ kafka_broker_url }}/<BROKER_IP>:6667/' /opt/yaf/
# sed -i 's/{{ yaf_topic }}/yaf/' /opt/yaf/
# chmod 755 /opt/yaf/
# wget -O /etc/init.d/yaf
# sed -i 's/{{ yaf_home }}/\/opt\/yaf/' /etc/init.d/yaf
# sed -i 's/{{ yaf_start }}/\/opt\/yaf\//' /etc/init.d/yaf
# sed -i 's/^DAEMONOPTS=\"${@:2}\"$/DAEMONOPTS=\"${@:2} --idle-timeout 0\"/' /etc/init.d/yaf
# chmod 755 /etc/init.d/yaf

Install tcpreplay:

# wget -O /tmp/tcpreplay-4.1.1.tar.gz
# /bin/gtar --extract -C /opt -z  -f /tmp/tcpreplay-4.1.1.tar.gz
# cd /opt/tcpreplay-4.1.1/
# ./configure --prefix=/opt
# make -j4
# make install
# mkdir /opt/pcap-replay
# chown root:root /opt/pcap-replay
# chmod 755 /opt/pcap-replay
# cd /opt/pcap-replay
# wget
# echo "include \$RULE_PATH/test.rules" >> /etc/snort/snort.conf
# echo "alert tcp any any -> any any (msg:'snort test alert'; sid:999158; )" > /etc/snort/rules/test.rules
# wget -O /etc/init.d/pcap-replay
# sed -i 's/{{ pcap_replay_home }}/\/opt\/pcap-replay/' /etc/init.d/pcap-replay
# sed -i 's/{{ pcap_replay_interface }}/tap0/' /etc/init.d/pcap-replay
# sed -i 's/{{ tcpreplay_prefix }}/\/opt/' /etc/init.d/pcap-replay
# chmod 755 /etc/init.d/pcap-replay

Install monit:

# yum install monit -y
# wget -O /etc/monitrc
# sed -i 's/{{ inventory_hostname }}/<IP ADDRESS>/' /etc/monitrc
# sed -i 's/{{ monit_user }}/admin/' /etc/monitrc
# sed -i 's/{{ monit_pass }}/monit/' /etc/monitrc
# chmod 600 /etc/monitrc
# wget -O /etc/monit.d/pcap-replay.monit
# chmod 644 /etc/monit.d/pcap-replay.monit
# wget -O /etc/monit.d/pcap-service.monit
# chmod 644 /etc/monit.d/pcap-service.monit
# wget -O /etc/monit.d/pycapa.monit
# chmod 644 /etc/monit.d/pycapa.monit
# wget -O /etc/monit.d/snort.monit
# chmod 644 /etc/monit.d/snort.monit
# wget -O /etc/monit.d/yaf.monit
# chmod 644 /etc/monit.d/yaf.monit
# wget -O /etc/monit.d/bro.monit
# sed -i 's/^  with pidfile.*$/  with pidfile \/usr\/local\/bro\/spool\/bro\/\.pid/' /etc/monit.d/bro.monit
# chmod 644 /etc/monit.d/bro.monit
# systemctl enable monit
# systemctl start monit
# systemctl status monit
# monit reload
# monit stop all
# monit start all
# monit summary | tail -n +3 | awk -F"'" '{print $2}'
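The final pipeline strips monit’s header lines with `tail` and pulls the quoted service names out of the `monit summary` output. A self-contained demo of the awk part (the sample lines mimic monit’s output format):

```shell
# monit summary quotes each service name; splitting on the single quote
# with -F"'" makes the name field 2.
printf "Process 'snort'   Running\nProcess 'yaf'   Running\n" |
  awk -F"'" '{print $2}'
```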

Exposed Interfaces

In the end, you’ll end up with a bunch of exposed UIs:
– Ambari: http://node1:8080/
– Kibana: http://node1:5000/
– Sensor Status (monit): http://node1:2812
– Elasticsearch: http://node1:9200/
– Storm UI: http://node1:8744/
– Metron REST interface: http://node1:8082/swagger-ui.html#/
– Management UI: http://node1:4200/ (user/password)
– Apache Nifi: http://node1:8089/nifi/
– Zookeeper: http://node1:2181
– Kafka: http://node1:6667
