Author: rp1v0e4enw7i

  • WordList

    WordList

    custom fuzzing wordlist fuzzing_list.txt

    cat urls.txt | sed 's|\(.*\)/[^/]*$|\1|' | cut -d"/" -f4,5,6,7,8,9,10,11 | tr "/" "\n" | sed '/^$/d' | anew fuzzing_list.txt
    

    custom dns wordlist dns-wordlist.txt

    cat alltargets.txt | sed 's/\.[^.]*$//' | tr "." "\n" | egrep -v '^[0-9]*$' | anew dns-wordlist.txt
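
    The resulting DNS wordlist can then be fed to a subdomain brute-forcing tool; a hedged example with gobuster (the target domain is a placeholder):

    # Illustrative: brute-force subdomains of a placeholder domain using the generated wordlist
    gobuster dns -d target.example -w dns-wordlist.txt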
    

    scan these urls for nuclei misconfiguration urls-for-nuclei.txt

    cat urls.txt | grep -E "^https?://[^/]+/.+" | cut -d"/" -f1-4 | anew -q urls-for-nuclei.txt ; cat urls.txt | grep -E "^https?://[^/]+/.+" | cut -d"/" -f1-5 | anew -q urls-for-nuclei.txt ; cat urls.txt | grep -E "^https?://[^/]+/.+" | cut -d"/" -f1-6 | anew -q urls-for-nuclei.txt
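
    The resulting URL list can then be scanned with nuclei; a hedged example invocation (the misconfig tag assumes a default nuclei-templates install):

    # Illustrative: run misconfiguration templates against the collected URLs
    # (-l reads targets from a file, -tags filters templates by tag)
    nuclei -l urls-for-nuclei.txt -tags misconfig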
    

    default-username-password.txt

    curl -s "https://raw.githubusercontent.com/rix4uni/WordList/main/default-username-password.txt"|cut -d":" -f1 | tee -a username.txt && curl -s "https://raw.githubusercontent.com/rix4uni/WordList/main/default-username-password.txt"|cut -d":" -f2 | tee -a password.txt
    

    custom parameters wordlist params.txt

    cat urls.txt | grep "\.php?" | uro | grep "?" | cut -f2 -d"?" | cut -f1 -d"=" | sed '/^\s*$/d'| anew params.txt
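
    One way to use the resulting parameter wordlist is GET-parameter fuzzing; a hedged sketch with ffuf (the target URL and match condition are placeholders):

    # Illustrative: fuzz parameter names on a single endpoint using params.txt
    # (-w wordlist, -u target URL with the FUZZ keyword, -mc match on HTTP 200)
    ffuf -w params.txt -u "https://target.example/index.php?FUZZ=1" -mc 200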
    

    custom fuzzing wordlist onelistforall.txt

    curl -s "https://raw.githubusercontent.com/maurosoria/dirsearch/master/db/dicc.txt" | anew -q onelistforall.txt && curl -s "https://raw.githubusercontent.com/six2dez/OneListForAll/main/onelistforallmicro.txt" | anew -q onelistforall.txt && curl -s "https://raw.githubusercontent.com/six2dez/OneListForAll/main/onelistforallshort.txt" | anew -q onelistforall.txt && curl -s "https://raw.githubusercontent.com/ayoubfathi/leaky-paths/main/leaky-paths.txt" | anew -q onelistforall.txt && curl -s "https://raw.githubusercontent.com/Bo0oM/fuzz.txt/master/fuzz.txt" | anew -q onelistforall.txt && curl -s "https://raw.githubusercontent.com/abdallaabdalrhman/Wordlist-for-Bug-Bounty/main/great_wordlist_for_bug_bounty.txt" | anew -q onelistforall.txt && curl -s "https://raw.githubusercontent.com/danielmiessler/SecLists/master/Discovery/Web-Content/raft-large-directories.txt" | anew -q onelistforall.txt && curl -s "https://wordlists-cdn.assetnote.io/data/automated/httparchive_php_2020_11_18.txt" | anew -q onelistforall.txt && curl -s "https://wordlists-cdn.assetnote.io/data/automated/httparchive_aspx_asp_cfm_svc_ashx_asmx_2020_11_18.txt" | anew -q onelistforall.txt && curl -s "https://wordlists-cdn.assetnote.io/data/automated/httparchive_jsp_jspa_do_action_2022_08_28.txt" | anew -q onelistforall.txt
    

    payloads

    • up to top 50 => *-small.txt
    • up to top 500 => *-medium.txt
    • all payloads with no limit => *-large.txt; if larger than 50 MB, split into *-large-1.txt, *-large-2.txt (see the example below)
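
    The size-based split into numbered chunks can be reproduced with GNU split; a minimal sketch (the 45 MB chunk size and file names are illustrative assumptions):

    # Illustrative: split an oversized payload list into numbered chunks below the 50 MB limit
    split -b 45m --numeric-suffixes=1 -a 1 --additional-suffix=.txt payloads-large.txt payloads-large-
    # produces payloads-large-1.txt, payloads-large-2.txt, ...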

    technologies

    • all technologies with no limit => techname/techname.txt; if larger than 50 MB, split into techname/techname-1.txt, techname/techname-2.txt

    nuclei-technologies

    Using: nuclei-wordlist-generator.go

    • techname/techname-unknown.txt
    • techname/techname-info.txt
    • techname/techname-low.txt
    • techname/techname-medium.txt
    • techname/techname-high.txt
    • techname/techname-critical.txt
    • techname/techname-all.txt


  • gitcube

    Git Cube – Be a git Power User

    Tips and topics surrounding git.

    Credits

    Thank you to the creator of the 3D CSS/HTML cube at https://codepen.io/chinchang/pen/lLzyB! This cube was used as the base of this presentation.

    Getting Started with Create React App

    This project was bootstrapped with Create React App.

    Available Scripts

    In the project directory, you can run:

    yarn start

    Runs the app in the development mode.
    Open http://localhost:3000 to view it in the browser.

    The page will reload if you make edits.
    You will also see any lint errors in the console.

    yarn test

    Launches the test runner in the interactive watch mode.
    See the section about running tests for more information.

    yarn build

    Builds the app for production to the build folder.
    It correctly bundles React in production mode and optimizes the build for the best performance.

    The build is minified and the filenames include the hashes.
    Your app is ready to be deployed!

    See the section about deployment for more information.

    yarn eject

    Note: this is a one-way operation. Once you eject, you can’t go back!

    If you aren’t satisfied with the build tool and configuration choices, you can eject at any time. This command will remove the single build dependency from your project.

    Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except eject will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own.

    You don’t have to ever use eject. The curated feature set is suitable for small and middle deployments, and you shouldn’t feel obligated to use this feature. However we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it.

    Learn More

    You can learn more in the Create React App documentation.

    To learn React, check out the React documentation.


  • azmisahin-software-web-component-trace-manager-node

    Introduction

    The code architecture allows a simple method to be monitored.


    Example

    For a quick start, let’s get started.

    Install

    You can add the latest version to your project.

    $ npm i --save trace-manager

    Usage

    Let’s take this example of something to test.

    // package define
    const TraceManager = require('trace-manager')
    
    // module instance
    var tm = new TraceManager()
    
    // any data
    let data = [{ data: 'any' }]
    
    // usage
    tm.trace('trace', ...data)
    tm.debug('debug', ...data)
    tm.info('info', ...data)
    tm.warn('warn', ...data)
    tm.error('error', ...data)
    tm.verbose('verbose', ...data)

    Getting Started

    TO DO: Things to do when getting started on this project.

    Build and Test

    TODO: Will apply your code and project building standards to templates.

    • The tests will be applied as in the template projects.
    • Development and operation will be planned as test/slot/production.

    Contribute

    TODO : The best method of making a contribution will be to comply with the following items.

    • Work with algorithms and flowcharts to solve problems.
    • Make pull requests to version control systems.
    • Stick to Architecture and Design Patterns apps.
    • Take care to develop applications with Domain Based Design / Test-oriented development approaches.
    • Stick to the architectural patterns used in abstraction software like Model-View-Controller.
    • Be consistent in executing maintainable practices with Object Oriented Programming (abstraction, encapsulation, inheritance and polymorphism…) techniques.
    • Use behavior-oriented development tools effectively.
    • Make it a habit to use Integration testing / Unit Testing / Functional Testing / Automation Tests.
    • Be persistent in applying metrics that describe how well the source code has been tested. [ have something to show at meetings: ) ]
    • Send your code with conventional commit messages, and make your code understandable with static code analysis and “code documentation” tools.
    • Build event-driven, scalable service applications with serverless application development platforms.
    • Follow the steps to improve threading techniques like in services or mobile apps.

    While starting

    In the project, principles and architectural examples of development, code submission, consistent coding styles, and development in a team environment have been implemented.

  • bookkeep

    Django Bookkeep (Inventory & Merchandising)

    This is a bookkeeping (proof-of-concept) project for inventory and retail systems that demonstrates
    double-entry bookkeeping. It is built with Django, Python, and SQLite3 and works on the command line only.

    Install

    pip install -r requirements.txt
    

    Setup

    python manage.py makemigrations
    python manage.py migrate
    

    Seeder

    You’ll be required to seed the database with a period; use the command period:create without arguments
    to create a default period, or define one yourself. You can use the command db:base to seed the database
    without transactions, or the command db:all to seed it with ready transactions.

    Usage: seed.py [OPTIONS] COMMAND [ARGS]...
    
    Options:
      --help  Show this message and exit.
    
    Commands:
      db:all          Seed database with sample transactions
      db:base         Seed database without sample transactions
      period:actvt    Activate a different period
      period:create   Define period start and end date.
      purchase:order  Seed database with sample purchase order transactions
      sales:order     Seed database with sample sales order transactions
    

    Tests

    python runtests.py
    

    Usage

    python book.py
    

    Explanation

    This concept works by interfacing the sales and purchase transactions of inventory and retail systems
    with bookkeeping functionality. The accounting methodology uses the perpetual approach to inventory
    as opposed to the periodic approach.

    It is implemented by scheduling transactions before submitting them to accounting.
    Transactions remain scheduled as pending until the user is ready to push them to accounting
    with sch:push. Prior to this push, one must create a schedule with sch:new, to which one may
    add items to be purchased via lpo:add or items being sold via sale:add.

    Once the preferred schedule is pushed, the transaction can be viewed via the trx:last command
    and the bookkeeping details via the entry:last command. To record payments for purchases, use
    lpo:pay; to record sales receipts, use sale:rec. To fulfill a sales return, use sale:ret to
    undo an order by a number of units, then push the resulting schedule into transactions with
    sch:push. The details of any transaction can be conveniently viewed via trx:id. A sample
    session is sketched below.
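
    A minimal purchase workflow might look like the following sketch; the exact arguments to each command are omitted and are assumptions, so check each command's --help for its options:

    # seed a default period and base data first (see the Note below)
    python seed.py period:create
    python seed.py db:base

    # create a schedule, add a purchase order item, then push it to accounting
    python book.py sch:new
    python book.py lpo:add        # arguments (item, units, ...) omitted; see --help
    python book.py sch:push

    # inspect the resulting transaction and its bookkeeping entries
    python book.py trx:last
    python book.py entry:last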

    Note

    It is important to note that a period must be defined first via the seed.py utility's
    period:create command. Defining a new period will deactivate any previous periods and those
    transactions will no longer be visible. It is also important to finalize all transactions, orders and schedules before closing a period.

    Usage: book.py [OPTIONS] COMMAND [ARGS]...
    
    Options:
      --help  Show this message and exit.
    
    Commands:
      cat:filter    Catalogue filter items
      cat:last      View X number of last catalogues
      cat:new       Create new catalogue item
      entry:last    View X number of last transaction entries
      entry:rev     Reverse a transaction entry
      lpo:add       Add a number of units of a categorized item to a local...
      lpo:pay       Make payment for purchase order
      order:filter  Filter ordered items either by trx_no (transaction number)...
      order:last    View X number of last order items
      order:rev     Revert order only by schedule
      period:last   Last period should be the active period otherwise someone...
      sale:add      Add to sales order.
      sale:disc     Apply sales discount
      sale:rec      Receive payment for sales order
      sale:ret      Sales return units per order
      sch:last      View X number of last schedules
      sch:new       Create new schedule
      sch:push      Push schedule into transaction
      stock:filter  Filter stock items
      stock:last    View X number of stock items
      trx:id        Find transaction by ID or TRXNO.
      trx:last      View X number of last transactions
    

    Contribution

    This project purposely leaves out the implementation of the accounting formulas
    Assets = Liabilities + Capital and Income = Revenue – Expenses because it is mainly a proof of concept; it builds up to, and leaves off just below, that threshold.
    This is the area where you would generate financial statements and summaries. Please feel free to fork
    the project and implement this small remaining area.

    LICENSE

    MIT


  • devops-markdown-text-control

    Azure DevOps Extension: Markdown Text Control

    Azure DevOps extension to show text field control and markdown in same row.

    Learn how to build your own custom control for the work item form.

    More info about developing your own custom web extensions for Azure DevOps Services

    Usage

    1. Clone the repository.

    2. Open the Command Prompt and change to the directory where you cloned the project. For instance, if it is cloned in a folder called “extensions” and saved as “vsts-sample-wit-custom-control”, you will navigate to the following command line.

      cd C:\extensions\vsts-sample-wit-custom-control

    3. Run npm install to install required local dependencies.

    4. Run npm run publish.

    5. In your browser, navigate to your local instance of TFS, http://YourTFSInstance:8080/tfs.

    6. Go to your personal Marketplace.

    7. Click the Marketplace icon in the upper righthand corner.

    8. Click “Browse local extensions” in the dropdown.

    9. Scroll down and click on the “Manage Extensions” widget.

    10. Click the button “Upload new extension”.

    11. Browse to the .vsix file generated when you packaged your extension.

    12. Select the extension, and then click “Open”. Click “Upload” when the button activates.

    13. Hover over the extension when it appears in the list, and click “Install”.

    You have now installed the extension inside your collection. You are now able to put the control on the work item form.

    Make changes to the control

    If you make changes to your extension files, you need to compile the Typescript and create the .vsix file again (steps 4-7 in the “Package & Upload to the marketplace” section).

    Instead of re-installing the extension, you can replace the extension with the new .vsix package. Right-click the extension in the “Manage Extensions” page and click “Update”. You do not need to make changes to your XML file again.

    Make API calls to the work item form service

    Reading data from VSTS/TFS server is a common REST API task for a work item control. The VSS SDK provides a set of services for these REST APIs. To use the service, import it into the typescript file.

    import * as VSSService from "VSS/Service";
    import * as WitService from "TFS/WorkItemTracking/Services";
    import * as ExtensionContracts from "TFS/WorkItemTracking/ExtensionContracts";
    import * as Q from "q";

    Commonly Needed

    API         Function                  Usage
    VSSService  VSS.getConfiguration()    Returns the XML which defines the work item type. Used in the sample to read the inputs of the control to describe its behavior.
    WitService  getService()              Returns an instance of the server to make calls.
                getFieldValue()           Returns the field’s current value.
                setFieldValue()           Sets the field’s current value using your control.
                getAllowedFieldValues()   Returns the allowed values, or the items in a dropdown, of a field.

    How to invoke methods on a service call

    Create an instance of the work item service to get information about the work item. Use one of the service’s functions to get information about the field. This example asks for the allowed values of a field.

    WitService.WorkItemFormService.getService().then(
        (service) => {
            service.getAllowedFieldValues(this._fieldName).then(
                (allowedValues: string[]) => {
                    // do something
                }
            )
        }
    )

    Recommendation: use Q with service calls

    To wait on the response of multiple calls, you can use Q. This example shows how to ask for the allowed values and the current value associated with a field using the Q.spread function. You can make two parallel requests, and the code will not be executed until both services have returned a response.

    WitService.WorkItemFormService.getService().then(
        (service) => {
            Q.spread<any, any>(
                [service.getAllowedFieldValues(this._fieldName), service.getFieldValue(this._fieldName)],
                (allowedValues: string[], currentValue: (string | number)) => {
                    //do something
                }
            )
        }
    )

    Structure

    /src                - Typescript code for this extension
    /static/css         - Custom CSS assets for extension
    /static/images      - Image assets for extension and description
    /static/index.html  - Main entry point
    

    Grunt

    Two basic npm tasks are defined:

    • build – Compiles TS files into the dist folder
    • publish – Generates the .vsix file and publishes the extension to the marketplace using tfx-cli
  • PrimerServer2

    PrimerServer2

    PrimerServer2: a high-throughput primer design and specificity-checking platform

    Description

    PrimerServer was proposed to design genome-wide specific PCR primers. It uses candidate primers produced by Primer3, uses BLAST and nucleotide thermodynamics to search for possible amplicons, and selects specific primers for each site. By using multiple threads, it runs very fast, ~0.4 s per site in our case study of more than 10000 sites.

    This repository is based on Python3 and acts as the successor of legacy PrimerServer.

    External Dependencies

    Add these two pieces of software to your system PATH

    Install

    Don’t use Python 3.9 or above, since the primer3-py module doesn’t support Python 3.9 yet.

    conda create -n primer python=3.8
    conda activate primer
    

    Via PIP (release only)

    $ pip3 install primerserver2
    

    Via Github

    $ git clone https://github.com/billzt/PrimerServer2.git
    $ cd PrimerServer2
    $ python3 setup.py install
    

    Run testing commands

    ** (if installed from pip,) tests/query_design_multiple and tests/example.fa can be obtained from this github repository.
    
    ** full mode: design primers and check specificity
    $ primertool full tests/query_design_multiple tests/example.fa
    
    ** design mode: design primers only
    $ primertool design tests/query_design_multiple tests/example.fa
    
    ** check mode: check specificity only
    $ primertool check tests/query_check_multiple tests/example.fa
    
    

    Input Format (The First Parameter)

    in FASTA Format

    If you have parts of template sequences, you can directly input in FASTA format:

    >site1
    TGTGATATTAAGTAAAGGAACATTAAACAATCTCGACACCAGATTGAATATCGATACAGA
    TACCCCAACTGCCGCCAATTCAACCGACCCTTCACCACAAAAAAACTAATATTTATCAGC
    CAATA[GTTACCTGTGTG]ATTAATAGATAAAGCTACAAAAGCAAGCTTGGTATGATAGT
    TAATAATAAAAAAAGAAAAAACAAGTATCCAAATGGCCAACAAAGGCTGTATCAACAAGT
    >site2
    ACCAGATTGAATATCGATACAGATACCCCAACTGCCGCCAATTCAACCGACCCTTCACCA
    CAAAAAAACTAATATTTATCA[GC]CAATAGTTACCTGTGTGATTAATAGATAAAGCTAC
    AAGCAAGCTTGGTATGATAGTATTAATAATAAAAAAAGAAAAACAAGTATCCAAATGGCC
    

    Note there is a pair of square brackets [] indicating the target in each sequence. It means primers should be placed around the target. This is the default mode.

    in Text Format

    If you have genomic coordinates for each site (e.g. SNPs), you can input coordinates like:

    seq1 200 10
    seq1 400 10
    

    It means that two sites (one site per line) need primers designed. The first site is in seq1, starts at position 200, and the region length is 10 (i.e., seq1:200-209). The second site is in seq1, starts at position 400, and the region length is 10 (i.e., seq1:400-409).
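
    As a hedged illustration, those two lines could be saved to a file and passed to design mode (the file names are placeholders, and seq1 must be a sequence ID present in the template FASTA):

    $ printf 'seq1 200 10\nseq1 400 10\n' > regions.txt
    $ primertool design regions.txt template.fa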

    For details, see the wiki.

    Need to run the Web UI?

    Please refer to the wiki.

    Warning: About the reference genome

    If you use reference genomes with many unplaced scaffolds, be cautious, since scaffolds with high homology to the main chromosomes might influence your results.
    If possible, delete (some or all of) these unplaced scaffolds.
    For the human genome, we recommend using the no_alt_analysis_set, which has all the PAR regions marked with N.

    Comparison of the CLI and Web version

    Feature                 CLI   Web UI
    Design primers          ✔️    ✔️
    Checking specificity    ✔️    ✔️
    Progress monitor        ✔️    ✔️
    Number of tasks         High  Low
    Alternative isoforms          ✔️
    Exon-exon junction            ✔️
    Pick internal oligos          ✔️
    Custom Tm temperature         ✔️
    Custom max amplicons          ✔️
    Visualization                 ✔️



  • sockethub

    Sockethub

    A protocol gateway for the web.

    About

    Sockethub is a translation layer for web applications to communicate with other protocols and services that are traditionally either inaccessible or impractical to use from in-browser JavaScript.

    Built with modern TypeScript and powered by Bun, Sockethub is organized as a monorepo containing multiple packages that work together to provide a robust, extensible platform gateway.

    Using ActivityStreams (AS) objects to pass messages to and from the web app, Sockethub acts as a smart proxy server/agent, which can maintain state and connect to sockets, endpoints, and networks that would otherwise be restricted from an application running in the browser.

    Originally inspired as a sister project to RemoteStorage, and assisting in the development of unhosted and noBackend applications, Sockethub’s functionality can also fit into a more traditional development stack, removing the need for custom code to handle various protocol specifics at the application layer.

    Example uses of Sockethub include:

    • Chat protocols: XMPP, IRC

    • Feed processing: RSS, Atom feeds

    • Metadata discovery: Link preview generation, metadata extraction

    • Protocol translation: Converting between web-friendly ActivityStreams and traditional protocols

    Additional protocols like SMTP, IMAP, Nostr, and others can be implemented as custom platforms.

    The architecture of Sockethub is extensible and supports easy implementation of additional ‘platforms’ to carry out tasks.

    Documentation

    Features

    We use ActivityStreams to map the various actions of a platform to a set of AS ‘@type’s which identify the underlying action. For example, using the XMPP platform, a friend request/accept cycle would use the activity stream types ‘request-friend’, ‘remove-friend’, ‘make-friend’.

    Platforms

    Making a platform is as simple as creating a platform module that defines a schema and a series of functions that map to ActivityStream verbs. Each platform can be enabled/disabled in the config.json.

    Currently Implemented Platforms

    • Feeds – RSS and Atom feed processing
    • IRC – Internet Relay Chat protocol support
    • XMPP – Extensible Messaging and Presence Protocol
    • Metadata – Link preview and metadata extraction

    Development Reference

    • Dummy – Example platform implementation for developers

    For platform development guidance, see the Platform Development documentation.

    Quick Start

    Prerequisites

    • Bun v1.2+ (Node.js runtime and package manager)
    • Redis server (for data layer and job queue)

    Installation & Development

    # Install dependencies
    bun install
    
    # Start Redis (required for data layer)
    # - Using Docker: docker run -d -p 6379:6379 redis:alpine
    # - Using system package manager: brew install redis && brew services start redis
    
    # Build and start development server with examples
    bun run dev

    Browse to http://localhost:10550 to try the interactive examples.

    Production

    # Build for production
    bun run build
    
    # Start production server (examples disabled)
    bun run start

    Development Commands

    bun test                    # Run unit tests
    bun run integration         # Run integration tests (requires Redis + Docker)
    bun run lint                # Check code style
    bun run lint:fix           # Auto-fix linting issues

    Environment Variables

    For debugging and configuration options, see the Server package documentation.

    Debug logging:

    DEBUG=sockethub* bun run dev

    Packages

    Core Infrastructure

    Interactive Demos

    Platform Implementations

    Utilities

    Credits

    Project created and maintained by Nick Jennings

    Logo design by Jan-Christoph Borchardt

    Sponsored by NLNET

  • dotnet-database-testcontainers-example

    dotnet-database-testcontainers-example

    Introduction

    When creating integration tests, it is often the case that external resources such as databases are:

    • Based upon a SQLite database (via Microsoft.EntityFrameworkCore.Sqlite)
    • Based upon an in-memory database (via Microsoft.EntityFrameworkCore.InMemory)

    In both of these cases you are not testing against the actual database you would be using in your production environment. If you run against PostgreSQL in production, you should really be running your integration tests against the same database platform.

    The challenge of this close-to-production style of integration testing is having a PostgreSQL database consistently available without having to perform a lot of manual intervention. This is before we even begin to consider how that database would be seeded with a schema and data to support whatever tests we intend to run.

    Introducing Test Containers

    The Test Containers open source project solves this problem by providing throwaway instances of databases, message brokers and a whole host of other services via Docker. Test Containers allow us to:

    • Mirror as close to production infrastructure in our integration tests
    • Programmatically spin up and configure containers before our integration tests commence
    • Automatically destroy containers upon completion of integration tests

    You can view all the supported module/container types here and you will find great supporting documentation for use with the .NET Framework here.

    For the purposes of this example we want to use a PostgreSQL container loaded with movie data to support our /api/movies endpoint in our API project.

    Note: to use this project you will need to ensure you have Docker installed and running on your workstation.

    Running/Debugging The Application

    Setting up a PostgreSQL instance for development/debugging

    There is a docker-compose.yml file in the root of the cloned repository. If you do:

    docker compose up -d
    

    This will start the database. If you do:

    docker compose down --volumes
    

    This will stop the PostgreSQL container and remove its associated volumes. If you wish to keep re-using the container once it is created just omit the --volumes part of the above command and your data will remain between up/down operations.

    How is the database schema and data created ?

    In the root of the cloned repository you will find the etc/docker-entrypoint-initdb.d directory. This directory is mounted into the container when it is created. This directory contains a file called 01-create-movies-db.sql which will be executed within the container the first time it starts. This script creates:

    • A new database called movies
    • A new user called moviesuser with a password and appropriate permissions
    • Connects to the movies database and creates the tables and data required by the application

    The appsettings.json file in the API project has a connection string called MoviesDb that can connect to this container whilst running and debugging.
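
    To quickly confirm the seeded database while the compose stack is running, you can query it from inside the container; a sketch, assuming the compose service is named postgres (check docker-compose.yml for the actual service name):

    # list the tables created by 01-create-movies-db.sql in the movies database
    docker compose exec postgres psql -U moviesuser -d movies -c '\dt'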

    The test/Integration Project.

    In this project we depend upon the following Test Containers nuget package:

    • Testcontainers.PostgreSql

    This nuget package provides the ability to create a short-lived PostgreSQL Docker container configured with:

    • A specific PostgreSQL image/version
    • A named database with a username and password
    • Port Bindings
    • Volume Bindings

    The tests in this project make use of xUnit and Fluent Assertions to orchestrate our tests.
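
    The integration tests are run with the standard .NET tooling; a sketch, assuming Docker is running so the test container can be created:

    # run all tests, including the integration tests backed by the PostgreSQL test container
    dotnet test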

    MoviesControllerTests

    If we look at the test

    • TestingContainersExample.Tests.Integration.API.Controllers.MoviesControllerTests

    we want to have our database available for the lifetime of the test suite. We can use a WebApplicationFactory alongside a PostgreSqlContainer to achieve this.

    How does the test PostgreSQL container get created ?

    The test class implements/extends IClassFixture<IntegrationTestWebApplicationFactory>. xUnit class fixtures are a shared context that exists for all tests in the class. In our case this is:

    • A WebApplicationFactory hosting our API.
    • An instance of a PostgreSqlContainer we can connect to from our MoviesDbContext in the service collection of the web application.

    In the constructor of IntegrationTestWebApplicationFactory we use a PostgreSqlBuilder to define what our PostgreSQL image should contain. You will see it:

    • Uses the very latest PostgreSQL image postgres:latest
    • Names the database movies
    • Creates a user called moviesuser with an associated password
    • Mounts the scripts/docker-entrypoint-initdb.d directory of the project into /docker-entrypoint-initdb.d within the container. This directory contains the script 01-create-movies-db-data.sql which will create our schema objects and data when the container starts.
    • Assigns a random external port from the container which can be used to connect to the database

    This class also implements IAsyncLifetime and when the instance is given to the test class the InitializeAsync() method is called. This will trigger the PostgreSQL Test Containers instance to start. When the test class finishes with the fixture the DisposeAsync() method will be called. This stops the PostgreSQL test container and destroys/removes it from your docker instance.

    How Does The Web Application Wire To The Database In The Test Container ?

    This same class extends WebApplicationFactory<Program>. When the overridden ConfigureWebHost method is invoked it:

    • Finds and removes the DbContextOptions<MoviesDbContext> which was already loaded into the service collection with services.AddDbContext<MoviesDbContext> in program.cs
    • Calls services.AddDbContext<MoviesDbContext> to re-create the MoviesDbContext using the connection string obtained from the test container.

    As we define the container with .WithPortBinding(5432, assignRandomHostPort: true), this PostgreSQL instance will not collide with any other instances of Postgres you may have running in Docker (especially if they use the default port 5432).

    MoviesServiceTests

    This test provides a slightly different approach to using the PostgreSQL test container. Within this test we are only testing the TestingContainersExample.Common.Services.MovieService class so we don’t need the whole application to be spun up – we only require a MoviesDbContext in order to exercise the service.

    If you look at TestingContainersExample.Tests.Integration.Fixtures.MoviesDbContextFixture you will see that this class only provides a mechanism to obtain a MoviesDbContext. The test goes on to create an instance of the MoviesService to which it can provide the MoviesDbContext created by the fixture class.

    Conclusion

    Using test containers provides a very nice mechanism to allow you to test your code in a way that more closely matches your production infrastructure. In this instance we are only making use of a database test container but if you require other services like Kafka, RabbitMq, Redis etc. they are all supported. Thanks for looking.


  • mindra

    mindra

    A command-line wrapper for diagrams and gloss so we can leverage them outside Haskell.

    The goal is to provide a good subset of features from both libraries.

    See mindra-clj for an example of a client library. It talks to mindra via stdin/stdout using just formatted text.

    Current status

    Diagrams

    Only the SVG backend is supported, and only a very small subset of diagrams is exposed. See svg-parser for what is supported and how the commands are parsed into diagram(s).

    See mindra-clj-diagrams for some examples.

    Gloss

    Most of the gloss features are supported. We should be able to use mindra for creating both static pictures and animations (with event handling!). See gloss-parser for what is supported and how the commands are parsed into gloss picture(s).

    See mindra-clj-gloss for some examples.

    Installation

    Linux and Mac

    Install:

    brew install rorokimdim/brew/mindra

    Upgrade:

    brew upgrade mindra

    Uninstall:

    brew uninstall mindra

    Windows

    Binaries are available at releases.

    Others

    No pre-built binaries available at this time. We will need to build from source using stack install or cabal install.

    Install stack, clone this repository and run the following in repository directory.

    stack install

    Basic usage

    A. Start mindra command

    mindra

    It should print READY INIT which means it is ready to receive the INIT (initialization) command.

    B. Initialize it for either diagrams or gloss

    For diagrams

    Configure for SVG of size 300px by 400px:

    INIT Diagrams SVG 300 400
    
    

    Note: Each command should be followed by a blank line.

    For gloss

    Configure for a window of size 500px by 500px, at position 10px, 10px on the screen, with the title “My Title”, and white background color (red, green, blue, alpha values):

    INIT Gloss
    Window 500 500 10 10 "My Title"
    Color 255 255 255 255
    
    

    Note: Each command should be followed by a blank line.

    C. Draw something

    For diagrams

    SVG Circle 100
    
    

    For gloss

    PICTURE Circle 100
    
    

    Note: Each command should be followed by a blank line.

    Hit ESC to close window.
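
    Because mindra communicates over stdin/stdout with plain text, the same session can also be scripted; a rough sketch (the exact output framing is an assumption and may differ):

    # Illustrative: pipe an init command and a draw command, each followed by a
    # blank line, into mindra and capture whatever it writes to stdout
    printf 'INIT Diagrams SVG 300 400\n\nSVG Circle 100\n\n' | mindra > output.txt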

    Credits

    1. Haskell
    2. Diagrams and Gloss
    3. All of these libraries and all the things they depend on


  • RSSI-based-OFDM-signal-classification

    RSSI-based-OFDM-signal-classification

    Due to limited licensed bands and ever-increasing traffic demands, the mobile communication industry is striving to offload licensed-band traffic to unlicensed bands. Many challenges come along with the operation of LTE in unlicensed bands while co-located with legacy Wi-Fi operation in the unlicensed band. In this co-existing environment, it is imperative to identify the technologies so that an intelligent decision can be made for maintaining the quality of service (QoS) requirements of users.

    Next to this unlicensed co-existing environment, a second concern is the sharing of the licensed bands where DVB-T operates. This is called white space reuse. The reuse factor used in DVB-T systems leads to unused spectrum at a given location. Users can opt to use this spectrum if and only if no DVB-T transmission is present and they transmit using less power than TV broadcast stations. It is thus necessary to periodically sense if the spectrum is unused by the primary user or other secondary users. On the other hand the primary user, the TV broadcast stations, will want to detect if there is illegal use of their licensed spectrum at the time they want to use it.

    Manual feature extraction vs autonomous feature learning

    Wireless technology identification can be implemented in multiple ways. We decided to use machine learning techniques, given many recent breakthroughs and successes in other domains. Furthermore, this allows the system to learn to identify wireless technologies on its own, given data. How we captured this data is described in the next section.

    We consider two techniques for machine learning: one where we manually extract features using expert knowledge and one where we give raw RSSI data to the machine learning model. The second technique exploits the autonomous feature learning capabilities of neural networks.

    We manually extracted the following features:

    • r0,r1,…,r19 are 20 intervals selected from the input histogram. r0 corresponds with the most left part of the histogram with frequency > 0, while r19 represents the most right part of the histogram with frequency > 0. Each interval thus contains 5% of the histogram and its value resembles the frequency of RSSI values within the corresponding interval.
    • minR is the minimum RSSI value with frequency > 0 and thus the left boundary of the histogram.
    • maxR is the maximum RSSI value with frequency > 0 and thus the right boundary of the histogram.
    • nP is the measured number of peaks in the histogram.
    • wP is the width of the highest peak.
    • stdHist is the standard deviation of the histogram values.
    • stdData is the standard deviation of the RSSI values upon which the histogram is calculated.
    • meanData is the mean of the RSSI values upon which the histogram is calculated.
    • medianData is the median of the RSSI values upon which the histogram is calculated.

    Manual feature extraction allows faster signal classification, but requires expert knowledge. The autonomous feature learning model is more flexible because it adapts to new situations given enough useful data. Using complex DNN models also allows slightly higher accuracy (98%) than manual feature selection methods (97%).

    Dataset description

    We used two datasets that are part of the eWINE project.

    The first dataset, used for training, was captured at various locations in Ghent, Belgium. The dataset can be found here.

    A second dataset, used for validation, was captured at Dublin, Ireland. The dataset can be found here.

    Model description

    The models for both manual and automatic feature extraction are present in manual feature extraction/rssilearningmanual.m and automatic feature learning/neuralnetworkautomatic.m respectively. Manual feature extraction uses the features as described before as input, while automatic feature extraction uses 256 RSSI values which are derived from 16 IQ samples per RSSI value. The neural network architecture of the manual model can be seen below.

    Neural network - manual feature extraction - 29 input nodes / 25 hidden nodes / 10 hidden nodes / 3 output nodes for Wi-Fi, LTE and DVB-T

    Contact

    For further information, you can contact me at jaron.fontaine@ugent.be.
