Kubernetes Migration Case Study

Migrating Netdelta From Docker to Kubernetes

In latish 2020, I moved Netdelta from a Docker deployment to Kubernetes, partly to see what all this Kubernetes jazz is about, and partly to investigate whether it would help me with the management of Netdelta containers for different punters, each of whom has their own docker container and Apache listening service.

I studiously went through the Kubernetes quick tutorial and found I had to dig deeper into the documentation. Even then, some aspects weren't covered well. This post explains what I did to deploy an app into Kubernetes, along with some of the gotchas I encountered that the Kubernetes documentation doesn't cover well, and I close with a view on Kubernetes: is the hype justified? Will I continue to host Netdelta in Kubernetes?

This is not a Kubernetes tutorial – it assumes some prior exposure on the part of the reader, but nonetheless links to the relevant documentation when Kubernetes concepts are covered.


Netdelta in Docker

This post isn't about Netdelta, but for illustrative purposes: Netdelta aids with the detection of unauthorised changes and hacker shells by running one-off or scheduled port scans, comparing the results with the previous scan, and alerting on changes. This is chunkier than it sounds, mostly because of the analytics that goes into false positive detection. In the Kubernetes implementation, scan results are held in a stateful persistent volume with MySQL.

Netdelta’s docker config can be dug into here, but to summarise the docker setup:

  • Database container – MySQL 5.7
  • Application container – Apache, Django 3.1.4, Celery 5.0.5, Netdelta
  • Fileserver (logs, virtualenv, code deployment)
  • Docker volumes and networking are utilised

Data Flows / Networking

The data flows reflect what is not exactly a bare-metal deployment. A Linode-hosted VM running Ubuntu 20 is the host; the Kubernetes node is minikube, with another node running on a Raspberry Pi 3 – the latter not being a production facility. The Pi 3 was only there to test how well the config would work with load balancing and Kubernetes ReplicaSets across nodes.

Reverse Proxy

Ingress connections from the internet are handled first by nginx acting as a reverse proxy. Base URLs for Netdelta are of the form https://www.netdelta.io/<site>. The nginx config …

server {
    listen 80;
    location /barbican {
        proxy_set_header Accept-Encoding "";
        sub_filter_types text/html text/css text/xml;
        sub_filter $host $host/barbican;
        proxy_pass http://local.netdelta.io/barbican;
    }
}

K8s Ingress Controller

This passes a URL with a first path level of <site> to be processed at local.netdelta.io, which is locally resolvable and points at localhost. This is where the nginx Kubernetes Ingress Controller comes into play. The pods in Kubernetes have NodePorts configured, but these aren't strictly necessary. The nginx ingress controller takes connections on port 80 and routes based on service names and the defined listening port:

┌──(iantibble㉿bionic)-[~]
└─$ kubectl describe ingress
Name:             netdelta-ingress
Namespace:        default
Address:          172.17.0.2
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host               Path  Backends
  ----               ----  --------
  local.netdelta.io
                     /barbican   netdelta-barbican:9004 (<none>)
Annotations:         <none>
Events:              <none>
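Incidentally, on minikube the ingress controller itself ships as an addon; if nothing answers on port 80, something like the following (assuming a stock minikube install) gets it running. On recent minikube versions the addon's pods land in the ingress-nginx namespace (kube-system on older ones):

minikube addons enable ingress
kubectl get pods -n ingress-nginx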

The YAML looks thusly:

┌──(iantibble㉿bionic)-[~/netdd/k8s]
└─$ cat ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: netdelta-ingress
spec:
  rules:
    - host: local.netdelta.io
      http:
        paths:
          - path: /barbican
            backend:
              service:
                name: netdelta-barbican
                port:
                  number: 9004
            pathType: Prefix

So the nginx ingress controller sees the connection forwarded from local.netdelta.io with a URL request of local.netdelta.io/<site>. The request matches a rule and is forwarded to the Kubernetes Service of the same name. The entity that actually answers the call is a docker container masquerading as a Kubernetes Pod, which is part of a Deployment. The next step in the data flow is routing the connection to the specified Kubernetes Service, which is covered briefly here but in more detail later under DNS.

The “service” aspect has the effect of exposing the pod according to the service setup:

┌──(iantibble㉿bionic)-[~/netdd/k8s]
└─$ kubectl get services -o wide
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE    SELECTOR
kubernetes          ClusterIP   10.96.0.1        <none>        443/TCP          119d   <none>
mysql-netdelta      ClusterIP   10.97.140.111    <none>        3306/TCP         39d    app=mysql-netdelta
netdelta-barbican   NodePort    10.103.160.223   <none>        9004:30460/TCP   36d    app=netdelta-barbican
netdelta-xynexis    NodePort    10.102.53.156    <none>        9005:31259/TCP   36d    app

DNS

There’s an awful lot of waffle out there about DNS and Kubernetes. Basically – and I know the god of devops won’t let me into heaven for saying this – creating a Service in Kubernetes gives you DNS: the service name resolves cluster-wide (in full, <service>.<namespace>.svc.cluster.local). DNS in a multi-namespace, multi-node scenario becomes more interesting of course, and there’s plenty you can configure that’s outside the scope of this article.
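A quick way to convince yourself – exec into any pod in the namespace (the pod name here is taken from the listing further down; this assumes the image carries nslookup):

kubectl exec -it netdelta-barbican-68d78c58bd-vnqdn -- nslookup mysql-netdelta
# resolves via cluster DNS, in full: mysql-netdelta.default.svc.cluster.local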

Netdelta’s Django settings.py defines a host and database name, and has to be able to find the host:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'netdelta-SITENAME',
        'USER': 'root',
        'HOST': 'mysql-netdelta',  # the Kubernetes Service name, resolved by cluster DNS
        'PASSWORD': 'NOYFB',
        'OPTIONS': dict(init_command="SET sql_mode='STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER'"),
    }
}

This aspect was poorly documented and far from obvious: the spec.selector field of the Service must match the spec.template.metadata.labels of the pods created by the Deployment, otherwise the Service has no endpoints.
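For illustration, a minimal sketch of the Service side of that pairing (the exact YAML used isn't reproduced here):

apiVersion: v1
kind: Service
metadata:
  name: netdelta-barbican
spec:
  type: NodePort
  selector:
    app: netdelta-barbican   # must match spec.template.metadata.labels in the Deployment
  ports:
    - port: 9004
      targetPort: 9004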

The Application Hosting in Kubernetes

Referring back to the data flow described above, there are pods for each Netdelta site. How is the Docker-hosted version of Netdelta represented in Kubernetes?

The Deployment YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: netdelta-barbican
  name: netdelta-barbican
spec:
  replicas: 1
  selector:
    matchLabels:
      app: netdelta-barbican
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: netdelta-barbican
    spec:
      containers:
        - image: registry.netdelta.io/netdelta/barbican:1.0
          imagePullPolicy: IfNotPresent
          name: netdelta-barbican
          ports:
            - containerPort: 9004
          args:
            - "barbican"
            - "9004"
            - "le"
            - "certs"
          resources: {}
          volumeMounts:
            - mountPath: /srv/staging
              name: netdelta-app
            - mountPath: /srv/logs
              name: netdelta-logs
            - mountPath: /le
              name: le
            - mountPath: /var/lib/mysql
              name: data
            - mountPath: /srv/netdelta_venv
              name: netdelta-venv
      imagePullSecrets:
        - name: regcred
      volumes:
        - name: netdelta-app
          persistentVolumeClaim:
            claimName: netdelta-app
        - name: netdelta-logs
          persistentVolumeClaim:
            claimName: netdelta-logs
        - name: le
          persistentVolumeClaim:
            claimName: le
        - name: data
          persistentVolumeClaim:
            claimName: data
        - name: netdelta-venv
          persistentVolumeClaim:
            claimName: netdelta-venv
      restartPolicy: Always
      serviceAccountName: ""
status: {}

Running:

kubectl apply -f netdelta-app-<site>.yaml

has the effect of creating a pod with a container for the Django application, Celery and Apache stack:

┌──(iantibble㉿bionic)-[~]
└─$ kubectl get deployments
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
fileserver          1/1     1            1           25d
mysql-netdelta      1/1     1            1           25d
netdelta-barbican   1/1     1            1           25d

┌──(iantibble㉿bionic)-[~]
└─$ kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
fileserver-6d6bc54f6c-hq8lk          1/1     Running   2          25d
mysql-netdelta-5fd7757c66-xqp2j      1/1     Running   2          25d
netdelta-barbican-68d78c58bd-vnqdn   1/1     Running   2          25d

K8s Equivalent of Docker Entrypoint Script Parameters

Some other points are perhaps worth mentioning around the Docker vs Kubernetes differences. My docker run command for the netdelta application container was like this:

docker run -it -p 9004:9004 --network netdelta_net --name netdelta_barbican -v netdelta_app:/srv/staging -v netdelta_logs:/srv/logs -v data:/data -v le:/etc/letsencrypt netdelta/barbican:core barbican 9004 le certs

So there are four parameters for the entrypoint script: site, port, le, and certs. The last two relate to letsencrypt certs, which won’t be covered here. These are represented in the Kubernetes Deployment YAML in spec.template.spec.containers.args.

Private Image Repository

spec.template.spec.containers.image is set to registry.netdelta.io/netdelta/<site>:<version tag>. Yes, that’s right folks, I’m using a private registry, which is a lot of fun until you realise how hard it is to manage the images there. The setup and management of the private registry won’t be covered here, but I found this to be useful.
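The imagePullSecrets: regcred reference in the Deployment above has to exist before pods can pull; a sketch of creating it (credentials are placeholders):

kubectl create secret docker-registry regcred \
  --docker-server=registry.netdelta.io \
  --docker-username=<user> \
  --docker-password=<password>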

One other point is about security and encryption in transit for the image pushes and pulls. I’ve been in security for 20 years and have lots of unrestricted penetration testing experience, and it shouldn’t be mandatory to use HTTPS over HTTP in every case. Admittedly I didn’t spend long trying, but I could not find a way to just use good old clear-text port 80 instead of 443, which in turn meant I had to configure an SSL certificate with all the management around it, where the risks are far from justifying such a measure.
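For what it’s worth, the Docker daemon itself can be told to tolerate a plain-HTTP registry – a sketch, assuming the stock /etc/docker/daemon.json location (dockerd needs a restart afterwards); minikube has a corresponding --insecure-registry start flag. Whether the registry server side plays along is another matter:

{
  "insecure-registries": ["registry.netdelta.io"]
}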

PV Mounts

In Dockerland I was using Docker Volumes for persistent storage of logs and application data. I was also using them for the application codebase, and any updates would be synced with containers by docker exec wrapped in a BASH script.

There was nothing unexpected in the deployment of the PVCs/PVs, but a couple of points are worth mentioning:

  • PV filesystem mounts: Netdelta container deployment involves a custom image, with files COPYed (the Docker command) from a local source into the image; the container is then run and the application can find the required files. The problem I ran into was having filesystems mounted over the directories where my application container expected to find its files. This meant I had to change my container entrypoint script to sync from the image when the Pod is deployed, whereas previously the directories were built out during the docker image build.
  • /tmp as the default PV file location: if you SSH to the node (the minikube container in my case), you will find the mounted filesystems under /tmp. /tmp is a critical directory for the good health of any Linux-based system and it needs to be 777 (i.e. readable and writeable by unauthenticated users and processes) with a sticky bit. For whatever reason this doesn’t find its way into security checklists for Kubernetes, but it really does warrant some attention. It can be changed by customising Kubernetes Storage Classes – a sketch follows this list. There’s one pointer here.
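As promised, a sketch of pinning a hostPath PV outside /tmp via a manual storage class – names, path and size are illustrative:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: netdelta-logs
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/netdelta-logs   # anywhere but /tmp
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: netdelta-logs
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi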

Database and Fileserver

The MySQL database service was deployed as a custom-built container in my Docker setup. There was no special reason for this other than to change filesystem permissions, and the fact that the listening service needed to be “exposed” and the database config changed to bind to 0.0.0.0 instead of localhost. What I found with the Kubernetes Pod was that I didn’t need to change the MySQL config at all: spec.ports.targetPort had the effect of “exposing” the listening service for the database.
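By way of a sketch (again, not the exact YAML used), the Service side of that:

apiVersion: v1
kind: Service
metadata:
  name: mysql-netdelta
spec:
  selector:
    app: mysql-netdelta
  ports:
    - port: 3306
      targetPort: 3306   # the port mysqld listens on inside the pod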

The main reason for using a fileserver in the Docker deployment of Netdelta was to act as a buffer between Docker Volumes and application containers. With my Unix hat on, one is left wondering how filesystem permissions will work (or otherwise) with file reads and writes across network-mounted disparate unix systems, where even if the same account names exist on each system, they may have different UIDs (unix systems use the UID to define ownership, not the name on the account). Moreover, an intermediate fileserver was advised as a best-practice measure in the Docker documentation. Accordingly this was the way I decided to go with Kubernetes: a “sidecar” Pod as a fileserver, which mounts the PVs onto the required mount points.

To K8s Or Not To K8s?

When you think about the way that, e.g., minikube is deployed – it’s a docker container. If you run docker ps -a, you can see all the mechanics at work. And then if you SSH to the minikube container, you can run another docker ps -a, and you see everything to do with Kubernetes pods and containers in the output. This seems like a mess, and if it isn’t, it will do until the mess actually arrives.

Furthermore, you don’t even want to look at the routing tables or network interfaces on the node host. You just cannot unsee that.

There is some considerable complexity here. Further, when you read the documentation for Kubernetes, it has all the air of documentation written by programmers. We hear a lot about the lack of IT-skilled people, but what is even more lacking are strategic thinkers (e.g. * [wildcard] Architects) who translate top-level business design requirements into tactical programming requirements.

Knowing how Kubernetes works should be enough to know whether it’s really going to be beneficial to host your containers there. If you’re not sure you need it, then you probably don’t. In the case of Netdelta, if I have lots and lots of Netdelta sites to manage then I can go with Kubernetes, and now that I have seen Netdelta happily running in Kubernetes with both scheduled celery jobs and manual user-initiated scans, the transition will be a smooth one. In the meantime, I can work with Docker containers alone, with the supporting BASH scripts, which are here if you’re interested.

Netdelta – Install and Configure

Netdelta is a tool for monitoring networks and flagging alerts upon changes in advertised services. Now – I like Python and especially Django, and around 2014 or so I was asked to set up a facility for monitoring for changes in that organisation’s perimeter. After some considerable digging, I found nada, as in nothing, apart from a few half-baked student projects. So I went off and coded Netdelta, and the world has never been the same since.

I guess when I started with Netdelta I didn’t see it as a solution that would be widely popular, because I was under the impression you could just do some basic shell scripting with nmap, and ndiff is specifically designed for delta flagging. However, what became apparent at an early stage was that timeouts are a problem. A delta will be flagged when a host or service times out – and this happens a lot, even on a gigabit LAN, and it happens even more in public clouds. I built some analytics into Netdelta that look back over the scan history and make a call on the likelihood of a false positive (red, amber, green).

Most organisations I worked with would benefit from this. One classic example I can think of: a trading house that had been on an aggressive M&A spree, maybe it was Black Friday or…? Anyway – they fired some network engineers and hired some new and cheaper ones, exacerbating what was already a poorly managed perimeter scenario. The CISO wanted to know what was going on with these Internet-facing subnets – enter Netdelta. Unauthorised changes are a problem! I am directly aware of no fewer than 6 incidents that occurred as a result of exposed SSH, SMB (WannaCry), and more recently RDP, and indirectly aware of many more.

Anyway without further waffle, here’s how you get Netdelta up and running. Warning – there are a few moving parts, but if someone wants it in Docker, let me know.

I always go with Ubuntu. The differences between Linux distros are like the differences between mueslis. My build was on 18.04, but it’s highly likely 19.x variants will be just fine.

apt-get update
apt-get install curl nmap apache2 python3 python3-pip python3-venv rabbitmq-server mysql-server libapache2-mod-wsgi-py3 git
apt-get -y install software-properties-common
add-apt-repository ppa:certbot/certbot
apt-get install -y python-certbot-apache

Clone the repository from GitHub into your <netdelta root>:

git clone https://github.com/SevenStones/netdelta.git

Filesystem

Create the user that will own <netdelta root>

useradd -s /bin/bash -d /home/<user> -m <user>

Create the directory that will host the Netdelta Django project if necessary

Add the user to a suitable group, and strip world permissions from netdelta directories

groupadd <group>
usermod -aG <group> <user>
usermod -aG <group> www-data
chown -R www-data:<group> /var/www
chown -R <user>:<group> <netdelta root>

Make the logs dir, e.g. /logs – you will need to modify /nd/netdelta_logger.py to point to this location. Note the celery monitor logs go to /var/log/celery/celery-monitor.log, which of course you can change.
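Something like this (ownership to match whichever account runs the app):

mkdir -p /logs /var/log/celery
chown <user>:<group> /logs /var/log/celery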

Strip world permissions from all netdelta and apache root dirs:

chmod -Rv o-rwx <web root>
chmod -Rv o-rwx <netdelta root>

Virtualenv

The required packages are in requirements.txt, in the root of the git repo. Your virtualenv build with Python 3 goes approximately like this …

python3 -m venv /path/to/new/virtual/environment

You activate thusly: source <path/to/new/virtual/environment>/bin/activate

Then suck in the requirements as root, remembering to fix permissions after you do this.

pip3 install wheel
pip3 install -r <netdelta root>/requirements.txt

You can use whatever supported database you like. MySQL is assumed here.

The Python framework mysqlclient was used with earlier versions of Django and MySQL, but with Python 3 and later Django versions, the word on the street is that PyMySQL is the way to go. With this though, it took some trickery to get the Django project up and running: editing the project’s __init__.py (<netdelta root>/netdelta/__init__.py) and adding a few lines …

import pymysql 
pymysql.install_as_MySQLdb()

While in the virtualenv, and under your netdelta root, add a superuser for the Django app.
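The stock Django way, from <netdelta root> with the virtualenv active:

python manage.py createsuperuser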

Patch libnmap

Two main mods to libnmap were necessary for use with Netdelta. First, with later versions of Celery (>3.1), there was an issue with “daemonic processes are not allowed to have children”, for which an alternative fork of libnmap fixed the problem. Second, we needed libnmap to return to Netdelta the process id of the running nmap port scanner process.

cd /opt
git clone https://github.com/pyoner/python-libnmap.git
cp ./python-libnmap/libnmap/process.py <virtualenv root>/lib/python3.x/site-packages/libnmap/

Then patch libnmap to allow Netdelta to kill scanning processes:

<netdelta_root>/scripts/fix-libnmap.bash

Change the environment variables to match your install, and pass the virtualenv name as a parameter.

Database Setup

Create a database called netdelta and use whichever encoding and collation you like.

CREATE DATABASE netdelta CHARACTER SET utf8 COLLATE utf8_general_ci;

Then from <netdelta root> with virtualenv engaged:

python manage.py makemigrations nd
python manage.py migrate

Web Server

I am assuming all you good security pros don’t want to use the development server? Well, as you’re only dealing with port scan data then… your call. I’m assuming Apache as a production web server.

You will need to give Apache a stub web root and enable the wsgi module. For the latter I added this to apache2.conf – this gives you some control over the exact version of Python loaded.

LoadModule wsgi_module "/usr/lib/python3.7/site-packages/mod_wsgi/server/mod_wsgi-py37.cpython-37m-i386-linux-gnu.so"
WSGIPythonHome "/usr"

Under the DocumentRoot line in your apache config file, give the pointers for WSGI.

WSGIDaemonProcess <site> python-home=<virtualenv root> python-path=<netdelta root>
WSGIProcessGroup <site>
WSGIScriptAlias / <netdelta root>/netdelta/wsgi.py
Alias /static/ <netdelta root>/netdelta/

Note also you will need to adjust your wsgi.py under <netdelta root>/netdelta/ –

# Add the site-packages of the chosen virtualenv to work with
site.addsitedir('<virtualenv root>/lib/python3.7/site-packages')

Celery

From the current shell, with the virtualenv activated, under <netdelta root>:

celery worker -E -A nd -n default -Q default --loglevel=info -B --logfile=<netdelta root>/logs/celery.log

Under systemd (you will almost certainly want to do this) with the root user. The script pointed to by systemd for

systemctl start celery

can have all the environment checking (this isn’t intended to be a tutorial in BASH scripting), but the core of it…

cd <netdelta root>
nohup $VIRTUALENV_DIR/bin/celery worker -E -A nd -n ${SITE} -Q ${SITE} --loglevel=info -B --logfile=${SITE_LOGS}/celery.log >/dev/null 2>&1 &

And then you can put together your own scripts for status, stop, restart.
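By way of illustration only, a crude stop script along the same lines (assumes one worker per site, matched on its -n node name):

#!/bin/bash
# kill the celery worker for one site by matching its full command line
SITE=$1
pkill -f "celery worker.*-n ${SITE}"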

The Art Of The Security Delta: NetDelta

In this article we’ll be answering the following questions:

  • What is NetDelta?
  • Why should I be monitoring for changes in my network(s)?
  • Challenges we faced along the way
  • Other ways of detecting deltas in networks
  • About NetDelta

What Is NetDelta?

NetDelta allows users to configure groups of IP addresses (by department, subnet, etc) and perform one-off or periodic port scans against the configured group, and have NetDelta send alerts when a change is detected.

There’s lots of shiny new stuff out there. APT-buster ™, Silver Bullet ™, etc. It’s almost as though someone sits in a room and literally looks for combinations of words that haven’t been used yet, and uses this as the driver for a new VC-sponsored effort. “OK, ‘threat’ is the most commonly used buzzword currently. Has ‘threat’ been combined with ‘cyber’ and ‘buster’ yet?”, “No”, [hellllooow Benjamin]. The most positive spin we can place on this is that we got so excited about the future that we ignored the present.

These new products are seen as serving the modern-day needs of information security, as though the old challenges, going back to day 0 in this sector, or “1998”, have been nailed. Well, how about the old stalwart of Vulnerability Management? The products do not “manage” anything; they produce lists of vulnerabilities – this is “assessment”, not “management”. And the lists they produce are riddled with noise (false positives), and what’s worse, there’s a much bigger false-negative problem in that the tools do not cover whole swaths of corporate estates. It doesn’t sound like this is an area that is well served by open source or commercial offerings.

Why Do I Need To Monitor My Networks For Changes?

On the same theme of new products in infosec – how about firewalls (that’s almost as old as it gets)? Well, we now have “next-gen” firewalls, but does that mean that old-gen firewalls were phased out, we solved the Network Access Control problem, and then moved on?

How about this: if there is a change in listening services – say, for example, in your perimeter DMZ (!) – and you didn’t authorise it, that cannot be a good thing. It’s one of the following:

  • Hacker/malware activity, e.g. hacker’s connection service (e.g. root shell), or
  • Unauthorised change, e.g. networks ops changed firewall or DMZ host configuration outside of change control
  • You imagined it – perhaps lack of sleep or too much caffeine

The first two can only be bad. And do we have a way to detect such issues currently?

How does NetDelta help us solve these problems?

Users can configure scans either on a one-off basis or to be run periodically. So, for example, as a user I can tell NetDelta to scan my DMZ perimeter every night at 2 AM and alert me by email if something changed from the previous night’s scan:

  • Previously unseen host comes online in a subnet – either an unauthorised addition to the group, an unauthorised (rogue) firewall change, or a new host deployment (maybe an unsanctioned wifi access point or webcam, for example). These concerns are becoming more valid in the age of the Internet of Things (IoT), where devices are shipped with open telnets and so on.
  • Host goes offline – this could be something of interest from a service availability/DoS point of view, as well as the dreaded ‘unauthorised change’.
  • Change in the available services – e.g. hacker’s exploit is successful and manages to locally open a shell on an unfiltered higher port, or new service turned on outside of change control. NetDelta will alert if services are added or removed on a target host.

Host ‘state’ is maintained by NetDelta for as long as the retention period allows, and overall 10 status codes reflect the state of a host over successive periodic scans.

Challenges We Faced With NetDelta

The biggest and only major obstacle is the ‘noise’ that results from scan timeouts. With some of the earlier test scans we noticed that sporadic scan time-outs would occur frequently. This presented a problem (it’s a sort of false positive) in that a delta is alerted on when really there hasn’t been a change in listening services or hosts. We increased the timeout options with nmap, but it didn’t help much and only added masses of time to the scans.

The aforementioned issue is one of the issues holding back the nmap/ndiff shell-script wrapper option; also, ndiff works with XML text files (messy). Shell scripts can work in corporate situations sometimes, but there are problems around the longevity and reliability of the solution. NetDelta is a web-based, database-driven solution (currently MySQL, but NoSQL is planned) with reports and statistics readily available – but the biggest problem with the ndiff option is the scan timeout issue mentioned in the previous paragraph.

NetDelta records host “up” and “down” states and allows the user to configure the number of consecutive scans a host must be down for before it is considered really down. So if the user chooses 3 as the option, and a target host is down for 3 consecutive scans, it is considered actually down, and a delta is flagged.

Overall the ‘state’ of a host is recorded in the backend database, and the state is a code that reflects a change in the availability or existence of a host. NetDelta has a total of 10 status codes.

Are There Other Ways To Detect NetDeltas?

Remember that we’re covering network services here, i.e. the ‘visibility’ of network services as they appear to hackers and your customers alike. This is not the same as local host configuration. I can run a netstat command locally to get a list of listening services, but this doesn’t tell me how well my firewall(s) protect me.
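To make that concrete – the local view versus the external view (assumes a Linux host with nmap installed; the target is a placeholder):

# local view: what the host itself thinks it is listening on
netstat -tlnp
# external view: what actually survives the firewall(s), as seen from outside
nmap -sT -p- <target>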

  • The ndiff option was covered already
  • Firewall management suites. At least one of these can flag changes in firewall rules, but it still doesn’t give the user the actual “real” view of services. Firewalls can be misconfigured, and they can do unexpected things under certain conditions. The port scanner’s view of a network is effectively the holy grail – it’s the absolute/real view that leaves no further room for interpretation and does not require further processing.
  • IDS – neither HIDS (host based intrusion detection) nor NIDS (network based intrusion detection) can give a good representation of the view.
  • SIEM – these systems take in logs from other sources, so partly by extrapolating from the previous comments, and also given the capabilities of SIEM itself, it would seem a challenge to ask a SIEM to do acrobatics in this area. A SIEM is not a cheap solution, and this conversation only applies where the organisation already owns a SIEM and can afford the added log storage space and management overhead. And of the SIEMs I know, none of them are sufficiently flexible to:
    • take in logs from a port-scanning source – theoretically it’s possible if you could get nmap to speak rsyslogish, and I’m sure there’s some other acrobatics that are feasible
    • perform delta analysis on those logs and raise an alert

About NetDelta

NetDelta is a Python/Django-based project with a MySQL backend (we will migrate to MongoDB – watch this space). Currently at v1.0; you are most welcome to take part in a trial. We stand on the shoulders of giants:

  • nmap (https://nmap.org/)
  • Python (https://www.python.org/)
  • Django (https://www.djangoproject.com/)
  • Celery (http://www.celeryproject.org/)
  • RabbitMQ (https://www.rabbitmq.com/)
  • libnmap – a Python framework for nmap – (https://github.com/savon-noir/python-libnmap)

Contact us for more info!