Deployment Scenarios
Cloud Hosted
Targets: Non RFC1918 Address Space
This deployment scenario presumes that the goal is to deploy a natlas system on cloud hosts to scan internet-facing hosts.
Targets: RFC1918 Address Space
This deployment scenario presumes that the goal is to deploy natlas to the cloud and perform internal network scanning, either within cloud-private networks or via network peering.
Self Hosted
Targets: Non RFC1918 Address Space
This deployment scenario presumes that the goal is to host natlas on your own existing infrastructure and scan hosts on the internet. For example, you might host natlas-server on a pre-existing server and spin up natlas-agent instances either on local servers or via cloud deployment strategies.
Targets: RFC1918 Address Space
This deployment scenario presumes that the goal is to host natlas inside your network and scan other hosts inside that network. This typically rules out using cloud resources for the natlas-agent components, as the agents need to be able to reach RFC1918 addresses in your target environment.
Elastic already provides official docker containers; additional details can be found in the Elasticsearch documentation. There are also existing terraform deployments.
It is important to note that, currently, natlas-server can only talk to a single elasticsearch node, regardless of how many are in the cluster.
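As a minimal sketch of the container option, a single-node Elasticsearch instance can be started as shown below. The `single-node` discovery setting comes from Elastic's own documentation; the version tag and container name are illustrative assumptions, so pick a release compatible with your natlas deployment.

```shell
# Run a single-node Elasticsearch container for development or small deployments.
# The version tag (7.17.9) and container name are assumptions for illustration.
docker run -d --name natlas-elasticsearch \
  -p 9200:9200 \
  -e "discovery.type=single-node" \
  docker.elastic.co/elasticsearch/elasticsearch:7.17.9
```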
Natlas-server will typically only be deployed once in any given scenario. It currently carries too much state to move to a model that can be load balanced, though that may be desirable in the future.
- Bare system install (`setup-server.sh`)
- Docker container (`docker pull docker.natlas.io/natlas-server/natlas-server:latest`)
- Terraform or other cloud automation
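As a hedged sketch of the docker option, the server container might be launched as follows. The image name is taken from the list above, but the exposed port and environment values are assumptions for illustration, not documented defaults; consult natlas-server/README.md for the real configuration.

```shell
# Pull the server image, then run it. The port (5000) and SECRET_KEY value
# are illustrative assumptions; see natlas-server/README.md for specifics.
docker pull docker.natlas.io/natlas-server/natlas-server:latest
docker run -d --name natlas-server \
  -p 5000:5000 \
  -e SECRET_KEY="change-me" \
  docker.natlas.io/natlas-server/natlas-server:latest
```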
Core application configuration is supplied via environment variables, including:
- `MEDIA_DIRECTORY`
- `SQLALCHEMY_DATABASE_URI`
- `SECRET_KEY`
A full list of configuration values can be found in `natlas-server/README.md`.
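For a bare system install, these variables could be exported in the service environment before starting the server. The paths and values below are purely illustrative assumptions:

```shell
# Illustrative values only; substitute paths and secrets for your deployment.
export MEDIA_DIRECTORY="/opt/natlas/media"
export SQLALCHEMY_DATABASE_URI="sqlite:////opt/natlas/metadata.db"
# Generate a random secret rather than hard-coding one.
export SECRET_KEY="$(python3 -c 'import secrets; print(secrets.token_hex(32))')"
```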
The following locations contain state that should persist across deployments:
- `natlas-server/logs/`
- `natlas-server/media/` OR the path set by `MEDIA_DIRECTORY`
- `natlas-server/metadata.db` OR the database set by `SQLALCHEMY_DATABASE_URI`
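When running the server as a container, that state could be kept on the host with bind mounts. Both the host paths and the in-container path (`/opt/natlas-server`) below are assumptions for illustration; check the image for its actual layout.

```shell
# Bind-mount the persistent locations listed above. All paths shown here
# are assumptions; verify them against the actual container image.
docker run -d --name natlas-server \
  -v /srv/natlas/logs:/opt/natlas-server/logs \
  -v /srv/natlas/media:/opt/natlas-server/media \
  -v /srv/natlas/metadata.db:/opt/natlas-server/metadata.db \
  docker.natlas.io/natlas-server/natlas-server:latest
```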
Natlas-agent may be deployed one or more times in support of any particular natlas system.
- Bare system install (`setup-agent.sh`)
- Docker container (`docker pull docker.natlas.io/natlas-agent/natlas-agent:latest`)
- Terraform or other cloud automation
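A hedged sketch of the docker option follows. The image name is taken from the list above, but the environment variable name (`NATLAS_SERVER_ADDRESS`) and its value are assumptions for illustration; the actual variable names are listed in natlas-agent/README.md.

```shell
# Sketch of running an agent container pointed at a server. The variable
# name and URL are illustrative assumptions; see natlas-agent/README.md.
docker pull docker.natlas.io/natlas-agent/natlas-agent:latest
docker run -d --name natlas-agent \
  -e NATLAS_SERVER_ADDRESS="https://natlas.example.com" \
  docker.natlas.io/natlas-agent/natlas-agent:latest
```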
Application configuration (not scan configuration) is supplied entirely via environment variables. A full list of these configuration values can be found in `natlas-agent/README.md`.
The following locations contain state that may need to persist across deployments:
- `natlas-agent/logs/`
- `natlas-agent/data/` (Maybe? If we configure an agent to save failed uploads, then we will probably want this to be persistent, but otherwise it's not necessary. Perhaps we only want `natlas-agent/data/failures/` to be persistent.)
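If an agent is configured to save failed uploads, the failures directory could be bind-mounted so it survives container restarts. The host path and in-container path below are illustrative assumptions:

```shell
# Persist only the failed-upload directory; both paths are assumptions
# made for illustration, not documented defaults.
docker run -d --name natlas-agent \
  -v /srv/natlas-agent/failures:/opt/natlas-agent/data/failures \
  docker.natlas.io/natlas-agent/natlas-agent:latest
```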