Friday, December 6, 2019

How To Add A JPEG File Analyzer To Zeek - Part 1

by Keith J. Jones, Ph.D


This blog post will walk you through the process of adding a JPEG file analyzer to Zeek. Please keep in mind that our main goal in this blog series is to “teach a person to fish”, providing a few small fish as bait to get started, rather than simply explaining how to add JPEG support to Zeek. While it may be great to add field “XYZ” for JPEGs in this blog post, the main point of this series is to demonstrate how you could add any file analyzer using repeatable processes. It will be relatively simple to add additional fields from JPEG images using the methods we present in this blog series.

We will first discuss why you might want to add a custom file analyzer. Then, we will walk you through a brief overview of the File Analysis Framework (FAF) so that we can figure out where we will put our new file analyzer’s source code. Lastly, we will present the source code to create a simple JPEG file analyzer stub. You can find the complete source code for this article available in this branch on GitHub:

The difference between this source code and master, at the time this post was written, can be found at:

Note that to understand this article we assume you are familiar with C++, Zeek scripting, and compiling Zeek from source.

Why Would You Want To Do This?

Zeek comes out of the box with a number of file analyzers. There are analyzers to calculate a file’s entropy, analyzers to extract files to a local hard drive, analyzers to parse Portable Executable (PE) files, and analyzers to calculate file hashes (MD5, SHA1, and SHA256). You may have a new idea for a file analyzer, but you may not know how to add your custom code to Zeek. If this sounds like your problem, then this article should help, because we are going to add a simple JPEG analyzer in much the same way.

For this task you must delve into the Zeek C++ source code for the file analysis framework, located in the src/file_analysis directory. We will discuss adding a JPEG file analyzer and explore the components inside Zeek that allow you to do this.

Enable Debugging

To view the logs required for development, we will need to enable debugging in Zeek. This can be done with the following configure command before you build Zeek from source:

./configure --enable-debug

After the source has been configured, build and install Zeek with the standard compilation commands:

make
sudo make install

Next, to generate the debugging logs for the FAF you must execute Zeek directly with the “-B file_analysis” command line option. This option is available because we enabled debugging with the configure command prior to building Zeek. You can also generate debugging messages for other portions of Zeek with the “-B” option, but that is outside the scope of this article.

At this point, you should create a file named “jpeg.zeek” containing the following:

event file_new(f: fa_file)
       {
       print "file_new";
       print f;
       }

We will be executing Zeek in the following manner, so you should be able to execute the same command without error:

zeek -B file_analysis -r http.pcap jpeg.zeek

The pcap file (http.pcap) should be downloaded if you want to follow along. After executing the command line above, you should see a debug.log created in your current directory. If this file exists and has content, then you are ready to move on to the next section.

File Analysis Framework Overview

The file analysis framework is a collection of:

  1. File magic signatures
  2. Built in functions (.bif)
  3. Zeek scripts
  4. C++ plugins

Each item above is important to the overall file analysis plugin creation process. The first item, file magic signatures, is used to identify files on your network. Here, the signature will be looking for JPEG files, which have a very well-known pattern in their first three bytes. Once new files are identified, they are processed using a series of built-in functions, Zeek scripts, and C++ plugins, the three remaining items listed above. Zeek’s process of identifying a file entering the file analysis framework can be simplified with the following diagram:

The diagram above shows that files are first identified by magic signatures. This is the same type of signature you may be familiar with through the Unix “file” command. Luckily, JPEG signatures are already defined along with the other image file types. If there is a match, Zeek passes the newly identified file to the FAF manager. The following diagram looks deeper into how the FAF accepts new file data:

Before we discuss the components of the FAF above, it is important to remember that the file data is not provided to the FAF all at once, since files traversing your network are seen across multiple network packets. Also keep in mind that portions of the file can be received out of order. Luckily, there is some file reassembly logic already built into Zeek and the FAF to handle this.
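Conceptually, that offset-based reassembly can be sketched outside of Zeek (a simplified stand-alone illustration, not Zeek's actual reassembler; the class name and interface here are invented): chunks are held until the data before them arrives, then released in order.

```cpp
#include <cstdint>
#include <map>
#include <string>

// Simplified sketch of offset-based reassembly: chunks may arrive out of
// order, and contiguous data is released to the analyzer as it completes.
class ChunkReassembler {
public:
    // Buffer a chunk seen at the given file offset.
    void AddChunk(uint64_t offset, const std::string& data)
        {
        pending_[offset] = data;
        Flush();
        }

    // Contiguous stream reassembled so far, starting at offset 0.
    const std::string& Stream() const { return stream_; }

private:
    // Move any chunks that are now contiguous into the output stream.
    void Flush()
        {
        auto it = pending_.find(stream_.size());
        while ( it != pending_.end() )
            {
            stream_ += it->second;
            pending_.erase(it);
            it = pending_.find(stream_.size());
            }
        }

    std::map<uint64_t, std::string> pending_; // chunks waiting for earlier data
    std::string stream_;                      // in-order bytes delivered so far
};
```

An analyzer downstream of such a reassembler only ever sees in-order data, which is the contract the FAF provides to file analyzers.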

Data is passed from Zeek to the manager, through the file C++ class, and finally to the analyzers (our JPEG analyzer will be one such example). The data is transferred between the classes and functions listed above using streams. What this means is that each function will minimally expect a data buffer and the length of that buffer, and that buffer will rarely contain the whole file unless the file is extremely small. Optionally, the offset into the file may also be supplied to these functions.

Since the data is delivered as a stream, we will not have the whole file for analysis at once, and it would be unwise to buffer everything until we have the whole file because we would quickly run out of memory. This is markedly different from traditional host-based forensics and presents unique software development challenges when creating a new file analysis plugin for a network traffic analysis framework like Zeek.
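The streaming constraint can be illustrated with a small stand-alone sketch (ordinary C++, not Zeek's actual API; the class and method names are invented for illustration): an analyzer that inspects only the first three bytes and then stops consuming data, no matter how large the file is.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-alone sketch, NOT Zeek's actual analyzer API: a
// stream-fed sniffer that buffers only the first few header bytes and then
// ignores the rest of the (possibly huge) file, mirroring how a file
// analyzer's DeliverStream() sees data chunk by chunk.
class StreamHeaderSniffer {
public:
    // Deliver the next chunk of the stream; returns false once we are done.
    bool DeliverStream(const uint8_t* data, uint64_t len)
        {
        if ( done_ )
            return false;

        while ( len > 0 && header_.size() < kHeaderLen )
            {
            header_.push_back(*data++);
            --len;
            }

        if ( header_.size() == kHeaderLen )
            {
            // JPEG streams begin with the SOI marker FF D8 followed by FF.
            is_jpeg_ = header_[0] == 0xFF && header_[1] == 0xD8 &&
                       header_[2] == 0xFF;
            done_ = true; // never buffer beyond the header
            }

        return ! done_;
        }

    bool IsJPEG() const { return is_jpeg_; }

private:
    static const unsigned kHeaderLen = 3;
    std::vector<uint8_t> header_;
    bool done_ = false;
    bool is_jpeg_ = false;
};
```

Note that the sniffer keeps at most three bytes of state regardless of file size; the JPEG stub we build below takes the same approach.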

Creating The JPEG File Analysis Plugin

Now that the fundamentals are out of the way, creating a new JPEG file analysis plugin is a basic eight-step process. Each step will be discussed in its own subsection below. The changes between master at the time this article was written and the changes discussed here can be viewed in the diff linked above.

Step 1: Copy The PE Plugin

A working file analysis plugin is found in the “pe” directory. Copy this directory to the “file_analysis/analyzer” directory and call it “jpeg”. Next, rename the files so that “pe” becomes “jpeg” in each filename (for example, “PE.h” becomes “JPEG.h”), so your directory structure looks like the following:

Rename “pe” to “jpeg” inside the *.pac files as well. These files are binpac files, and they will shortly define the parser for JPEG rather than PE. Delete the binpac file with “idata” in the name, as it is PE-specific and we will not use it with JPEG.

Step 2: Modify The CMake Files

The “CMakeLists.txt” in the “jpeg” directory should have the following content, which is based on the original PE plugin. Notice that the “PE” phrases have been translated to “JPEG” for the new plugin we are creating.


include(ZeekPlugin)

include_directories(BEFORE ${CMAKE_CURRENT_SOURCE_DIR}
                           ${CMAKE_CURRENT_BINARY_DIR})

zeek_plugin_begin(Zeek JPEG)
zeek_plugin_cc(JPEG.cc Plugin.cc)
zeek_plugin_bif(events.bif)
zeek_plugin_pac(JPEG.pac jpeg-analyzer.pac jpeg-file.pac jpeg-file-headers.pac jpeg-file-types.pac)
zeek_plugin_end()

The “CMakeLists.txt” in the “analyzer” directory should also be updated to include the new “jpeg” directory alongside the existing analyzers:

add_subdirectory(jpeg)
Step 3: Rename the PE Class to the JPEG Class

While renaming the class, remove the PE logic so that the class becomes a stub for the logic we would like to add later. In “JPEG.h”, the content of your new file will be:

#pragma once

#include <string>

#include "Val.h"
#include "../File.h"
#include "jpeg_pac.h"

namespace file_analysis {

/**
 * Analyze JPEG files.
 */
class JPEG : public file_analysis::Analyzer {
public:
       ~JPEG();

       static file_analysis::Analyzer* Instantiate(RecordVal* args, File* file)
              { return new JPEG(args, file); }

       virtual bool DeliverStream(const u_char* data, uint64_t len);

       virtual bool EndOfFile();

protected:
       JPEG(RecordVal* args, File* file);
       binpac::JPEG::File* interp;
       binpac::JPEG::MockConnection* conn;
       bool done;
};

} // namespace file_analysis

The corresponding “JPEG.cc” should look like the following:

#include "JPEG.h"
#include "file_analysis/Manager.h"
#include "events.bif.h"

using namespace file_analysis;

JPEG::JPEG(RecordVal* args, File* file)
    : file_analysis::Analyzer(file_mgr->GetComponentTag("JPEG"), args, file)
       {
       conn = new binpac::JPEG::MockConnection(this);
       interp = new binpac::JPEG::File(conn);
       done = false;

       mgr.QueueEventFast(file_jpeg, {
              file->GetVal()->Ref(),
              });
       }

JPEG::~JPEG()
       {
       delete interp;
       delete conn;
       }

bool JPEG::DeliverStream(const u_char* data, uint64_t len)
       {
       if ( conn->is_done() )
              return false;

       try
              {
              interp->NewData(data, data + len);
              }
       catch ( const binpac::Exception& e )
              {
              return false;
              }

       return ! conn->is_done();
       }

bool JPEG::EndOfFile()
       {
       return false;
       }

The binpac files may feel convoluted, so an explanation is in order. The binpac file “JPEG.pac” is used first. The content of this file is the following:

%include binpac.pac
%include bro.pac

analyzer JPEG withcontext {
       connection: MockConnection;
       flow: File;
};

connection MockConnection(bro_analyzer: BroFileAnalyzer) {
       upflow = File;
       downflow = File;
};

%include jpeg-file.pac

flow File {
       flowunit = JPEG_File withcontext(connection, this);
};

%include jpeg-analyzer.pac

This sets up an analyzer called “JPEG” with a connection to a flow called “File”. The structure “JPEG_File” is found in the included file “jpeg-file.pac”. The content of “jpeg-file.pac” is:

%include jpeg-file-types.pac
%include jpeg-file-headers.pac

# The base record for a JPEG file
type JPEG_File = case $context.connection.is_done() of {
       false -> JPEG : JPEG_Image;
       true -> overlay : bytestring &length=1 &transient;
};

type JPEG_Image = record {
       headers : Headers;
       pad : Padding(padlen);
} &let {
       padlen: uint64 = 100;
} &byteorder=bigendian;

refine connection MockConnection += {
       %member{
              bool done_;
       %}

       %init{
              done_ = false;
       %}

       function mark_done(): bool
              %{
              done_ = true;
              return true;
              %}

       function is_done(): bool
              %{
              return done_;
              %}
};

The included “jpeg-file-types.pac” has the following content:

# The BinPAC padding type doesn't work here.
type Padding(length: uint64) = record {
       pad: bytestring &length=length &transient;
};

This new type is used by the other binpac files, and it was carried over from the PE plugin. The other included file “jpeg-file-headers.pac” defines the header structure and has the following content:

type Headers = record {
       jpeg_header : JPEG_Header;
} &let {
       # Do not care about parsing rest of the file so mark done now ...
       proc: bool = $context.connection.mark_done();
};

type JPEG_Header = record {
       soi : bytestring &length=2;
       app : bytestring &length=2;
};

Most of the code is a copy from the PE version of the same file, with the headers shortened to just the few bytes expected at the beginning of every JPEG file. An example JPEG from this pcap trace looks like the following:

Notice that the first two bytes are “FFD8”. The first field, “soi” (“Start of Image”), comprises these two bytes. The next field is “app”, the application marker: 0xFF followed by another byte that identifies the application segment. The “soi” will always be 0xFFD8 and the “app” will always start with 0xFF for a JPEG image. Additional JPEG structures will be discussed in the second article.
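The fixed layout described above can be checked with a few lines of ordinary code (a stand-alone sketch, independent of the binpac grammar; the function name is ours, and it assumes an APPn segment follows SOI, which is typical but not guaranteed for every JPEG):

```cpp
#include <cstdint>

// Sketch: validate the first four bytes of a JPEG stream, as parsed by the
// JPEG_Header record above: a two-byte SOI marker (FF D8) followed by a
// two-byte application marker (FF En for APPn segments).
// Returns the APPn number (e.g. 0 for APP0/JFIF, 1 for APP1/Exif),
// or -1 if the buffer does not look like a JPEG header.
int jpeg_app_number(const uint8_t* buf, uint64_t len)
    {
    if ( len < 4 )
        return -1;
    if ( buf[0] != 0xFF || buf[1] != 0xD8 )   // SOI must be FF D8
        return -1;
    if ( buf[2] != 0xFF )                     // every marker starts with FF
        return -1;
    if ( buf[3] < 0xE0 || buf[3] > 0xEF )     // APP0..APP15 are FF E0..FF EF
        return -1;
    return buf[3] - 0xE0;
    }
```

For example, a JFIF file beginning FF D8 FF E0 yields application number 0, while a PNG's 89 50 4E 47 signature is rejected.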

The last binpac file is named “jpeg-analyzer.pac” and has the following content:

%extern{
#include "Event.h"
#include "file_analysis/File.h"
#include "events.bif.h"
%}

refine flow File += {

       function proc_jpeg_header(h: JPEG_Header): bool
              %{
              DBG_LOG(DBG_FILE_ANALYSIS, "TRYING TO PROCESS A JPEG!!!");

              if ( file_jpeg )
                     {
                     DBG_LOG(DBG_FILE_ANALYSIS, "PROCESSING A JPEG!!!");
                     }

              return true;
              %}
};

refine typeattr JPEG_Header += &let {
       proc : bool = $context.flow.proc_jpeg_header(this);
};

The code above adds a function to process the parsed “JPEG_Header” headers. For now it only outputs some debugging logs with the “DBG_LOG” function, but additional logic could be added here to parse further JPEG attributes if so desired.

Step 4: Define The Plugin

Next, edit “Plugin.cc” in the “jpeg” directory to be the following so that the new “JPEG” file analysis plugin exists:

// See the file "COPYING" in the main distribution directory for copyright.

#include "plugin/Plugin.h"
#include "file_analysis/Component.h"

#include "JPEG.h"

namespace plugin {
namespace Zeek_JPEG {

class Plugin : public plugin::Plugin {
public:
       plugin::Configuration Configure()
              {
              AddComponent(new ::file_analysis::Component("JPEG",
                           ::file_analysis::JPEG::Instantiate));

              plugin::Configuration config;
     = "Zeek::JPEG";
              config.description = "JPEG analyzer";
              return config;
              }
} plugin;

} // namespace Zeek_JPEG
} // namespace plugin

The code above links the C++ source code stub we are creating to a component we can call through Zeek scripts. The call will come in step 7.

Step 5: Create A New Event

Open “events.bif” and make sure it matches the following:

## This event is generated each time file analysis identifies
## a JPEG file.
##
## f: The file.
event file_jpeg%(f: fa_file%);

This will create a single new event called “file_jpeg” that we will use later. This file is automagically compiled into C++ source code by CMake, so just listing the new event here is enough to create it!

Step 6: Add the Zeek Scripts for JPEG Handling

Next, you will create a subdirectory named “jpeg” in the file analysis script directory, alongside the existing “pe” directory. Inside this directory you will need two files. The first file is named “__load__.zeek” and contains a single line:

@load ./main

The other file is named “main.zeek” and contains the following script, which is very similar to the PE script:

module JPEG;

export {

       redef enum Log::ID += { LOG };

       type Info: record {
              ## Current timestamp.
              ts: time &log;
              ## File id of this JPEG file.
              id: string &log;
       };

       ## Event for accessing logged records.
       global log_jpeg: event(rec: Info);

       ## A hook that gets called when we first see a JPEG file.
       global set_file: hook(f: fa_file);
}

redef record fa_file += {
       jpeg: Info &optional;
};

const jpeg_mime_types = { "image/jpeg" };

event zeek_init() &priority=5
       {
       Files::register_for_mime_types(Files::ANALYZER_JPEG, jpeg_mime_types);
       Log::create_stream(LOG, [$columns=Info, $ev=log_jpeg, $path="jpeg"]);
       }

hook set_file(f: fa_file) &priority=5
       {
       if ( ! f?$jpeg )
              f$jpeg = [$ts=network_time(), $id=f$id];
       }

event file_jpeg(f: fa_file) &priority=5
       {
       hook set_file(f);
       }

event file_state_remove(f: fa_file) &priority=-5
       {
       if ( f?$jpeg )
              Log::write(LOG, f$jpeg);
       }

This script sets up the logging and attaches our new JPEG analyzer to any file determined to be a JPEG file via its inferred MIME type. Because of the location of this script, it will be loaded automatically like the PE plugin. Right now, the JPEG plugin simply outputs the JPEG file’s ID number and a timestamp. Additional logic will be added to this plugin later, but this code will allow us to see that our new plugin works after we compile.

Step 7: Use The New Event

Make your “jpeg.zeek” script content the following:

event file_jpeg(f: fa_file)
       {
       print "file_jpeg";
       print f$jpeg;
       }

The script prints the file JPEG information we just created.

Step 8: Compile And Test

After every substantial change you will want to compile and test your changes. The following commands will accomplish this:

make
sudo make install
zeek -B file_analysis -r http.pcap jpeg.zeek

With “make”, as long as you haven’t executed “make clean” recently, only the portions of Zeek that have changed will be rebuilt. This will substantially improve your compile time.

After execution, you should see lines in your “debug.log” file similar to the following:

1320279566.886920/1574191165.328912 [file_analysis] [FFTf9Zdgk3YkfCKo3] Add analyzer JPEG

If you open your “files.log” file, you will see “JPEG” show up in the analyzers for each JPEG file, but not for files that are not JPEGs. This proves that the JPEG file analyzer we created is being attached to the JPEG files processed by Zeek. You should also see the debugging lines that demonstrate the binpac file parsing:

1320279566.886920/1574292315.388565 [file_analysis] TRYING TO PROCESS A JPEG!!!
1320279566.886920/1574292315.388569 [file_analysis] PROCESSING A JPEG!!!

The basics you have learned in this article will become an iterative process you should be familiar with as we improve our JPEG file analyzer in the next article.


This article walked you through the process of enabling debugging in Zeek, copying a working plugin, and modifying that plugin to become a new JPEG file analysis plugin stub. The source code created for this article is available in the GitHub branch linked above. The next article will address adding additional logic to our JPEG file analyzer, along with the types of data our analyzer will output to the rest of Zeek.


About Keith J. Jones, Ph.D

Dr. Jones is an internationally recognized industry expert with over two decades of experience in cyber security, incident response, and computer forensics. His expertise includes software development, innovative prototyping, information security consulting, application security, malware analysis & reverse engineering, software analysis/design, and image/video/audio analysis.

Dr. Jones holds undergraduate degrees in Electrical Engineering and Computer Engineering from Michigan State University. He also earned a Master of Science degree in Electrical Engineering from MSU, and he completed his Ph.D. in Cyber Operations at Dakota State University in 2019.

Wednesday, November 13, 2019

What is ‘Weird’ in Zeek?

By:  Fatema Bannat Wala, Security Engineer, University of Delaware

As you probably know, Zeek transforms network traffic into real-time logs used by threat hunters, incident responders, and network operators.

Most of these logs correspond to common network protocols, but there are a few interesting exceptions. The most intriguing exception may be the Zeek log called ‘weird’. The weird.log records unusual or exceptional activity that might indicate malformed connections, traffic that doesn’t conform to a particular protocol, malfunctioning or misconfigured hardware, or even an attacker attempting to avoid/confuse a sensor.

Not all ‘weird’ traffic is malicious. But when Zeek finds network communications that don’t comply with RFC standards and definitions, that can be a sign of something interesting and worth exploring. It might also reveal information about activity that is otherwise hard to notice in the traffic. It is important to keep in mind, though, that most of the logged information won’t be anything unusual; large networks typically exhibit many of the underlying activities that trigger Zeek’s ‘weird’ records.

Types of Weird

There are MANY types of weirds defined in Zeek; at least 200 have been observed triggering in real network traffic. Common ones include:

  • DNS_RR_unknown_type
  • dns_unmatched_msg
  • dns_unmatched_reply
  • fragment_with_DF
  • bad_ICMP_checksum
  • DNS_Conn_count_too_large
  • possible_split_routing
  • inappropriate_FIN
  • TCP_Christmas
  • data_after_reset
  • truncated_header
  • data_before_established
  • SYN_seq_jump
  • SYN_with_data
  • TCP_ack_underflow_or_misorder
  • DNS_truncated_RR_rdlength_lt_len
  • line_terminated_with_single_CR
  • DNS_RR_length_mismatch
  • connection_originator_SYN_ack

To check the weirds triggered in your environment, run a command like the following against your weird.log (this assumes zeek-cut is in your PATH):

cat weird.log | zeek-cut name | sort | uniq -c | sort -rn

2,603,914 DNS_RR_unknown_type
2,160,812 possible_split_routing
2,092,811 inappropriate_FIN
  753,398 fragment_with_DF
   18,343 bad_ICMP_checksum

The example above shows statistics for the most frequently triggered weirds in a university environment over a period of 24 hours.

Where to find Weirds?

Sometimes it’s very helpful to know the cause of ‘weird’ records while analyzing the weird.log file. This knowledge can help analysts categorize a ‘weird’ as benign or malicious. Unfortunately, there’s no comprehensive documentation of all weirds; they are defined at various locations throughout the source code of Zeek. The conditions that trigger the weird notices are mainly defined in the following locations:
  • In the core source code of Zeek (in .cc files)
  • In script land, in the base/ and policy/ folders (in various .zeek scripts)
When triggered by network traffic, weird notices are logged into a separate log file called “weird.log” in Zeek. The logging of different weirds can be controlled by the base/frameworks/notice/weird.zeek script, which does NOT contain all the weirds that are defined in Zeek; it only has a subset, showing what action to take when they get triggered. Hence any additional weird not already found in weird.zeek can be defined there, and the action for that weird can be controlled by the script.
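For example, a local script can extend the actions table from weird.zeek to change how individual weirds are handled. A sketch of such a configuration fragment (the Weird::actions table and the Weird::ACTION_* values are part of the stock weird.zeek; the weird names and actions chosen here are illustrative):

```zeek
# Tune how individual weirds are handled via the Weird framework's actions table.
redef Weird::actions += {
       ["DNS_RR_unknown_type"] = Weird::ACTION_IGNORE,     # silence a noisy weird
       ["TCP_Christmas"] = Weird::ACTION_NOTICE_PER_ORIG,  # notice once per source
};
```

This keeps local policy out of the base scripts while still controlling logging and notices for any weird, including ones the stock table does not list.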

Investigating Weirds

Following are a few examples of how to go about investigating the triggered weirds in the network:

1. DNS_RR_unknown_type:

Defined: The condition that causes this weird type to get triggered and logged is defined in src/analyzer/protocol/dns/

Cause: If you look into the code, this weird is triggered by RR types that Zeek does not currently parse.

Remediation: If the RR type ID recorded in the weird notices belongs to a valid RR type defined for the DNS protocol, then those notices can be safely ignored, or parsing support for those RR types can be added to Zeek.

2. possible_split_routing

Defined: The condition that causes this weird type to get triggered and logged is defined in src/analyzer/protocol/tcp/

Cause: When Zeek doesn’t see the other side of the connection, signifying possible split routing.

Remediation: Look for possible asymmetric routing or split routing caused by misconfigurations in the network. It might also indicate traffic not being properly load balanced (symmetric hashing) between the Zeek sensors, with different packets belonging to the same connection stream going to different Zeek workers.

3. inappropriate_FIN

Defined: The condition that causes this weird type to get triggered and logged is defined in src/analyzer/protocol/tcp/

Cause: When Zeek sees a packet with the FIN flag set at a point in a connection where it does not comply with the TCP RFC.

Remediation: Sometimes this weird is tied to possible_split_routing, which was discussed earlier, and remediating that weird also results in the suppression of this one. Zeek has many traps to catch similar, related weird activity, so remediating one weird can cause a few other related weirds to disappear from the logs.

Here’s some information about a few other weirds that might potentially signify malicious traffic or other problems:
bad_ICMP_checksum – defined in src/analyzer/protocol/icmp/
TCP_Christmas – defined in src/analyzer/protocol/tcp/

Reason: bad_ICMP_checksum / TCP_Christmas weird notices are often triggered by scanners sweeping ranges of IPs on the network.

Remediation: These weird notices don’t tend to be noisy, depending on your network, and blocking the offending IPs might be a reasonable action to protect the network. For bad_ICMP_checksum, be careful with the blocking action, as this notice is often triggered by traceroutes or other network troubleshooting, and blocking the source IPs might cause adverse results. Generally, setting a threshold of notices per host for this type of weird is a good idea before taking any action against the offending IPs.

I have a lot more to tell you about weird logs in the future, so stay tuned for future installments in this series!

Tuesday, November 12, 2019

ZeekWeek 2019 - Summary and Slides

The global community of Zeek developers and users gathered together in Seattle last month, October 8-11, for the annual ZeekWeek (formerly BroCon) event. 

171 network security professionals representing 84 organizations travelled from all over the world to share ideas and knowledge of Zeek.

This year’s event consisted of 2 Zeek training sessions, 17 presentations, lightning talks, a community Q&A panel discussion, and more.

In case you missed this year’s event, here is a list of all the talks as well as all the slides that were made available to organizers. The full agenda and talk descriptions can be found on the website. (Please note: There will be a message that states this event has already happened; just hit the escape key and it will go away. Also, videos coming soon!)


8 October 2019 - Pre-conference Training

(Training slides available for attendees only)
  • Intro to Zeek, Keith Lehigh, Indiana University
  • Making Sense of Encrypted Traffic, Matt Bromely and Aaron Soto

9 October 2019 - ZeekWeek Day 1 - Sessions

  • Opening Remarks, Keith Lehigh, Indiana University (Slides)
  • Keynote: The Threats are Changing, So are We as Defenders, Freddy Dezure, Founder and former Head CERT-EU (Slides)
  • eZeeKonfigurator: Web Frontend for the Config Framework, Vlad Grigorescu, ESnet (Slides)
  • BZAR – Bro/Zeek ATT&CK-based Analytics and Reporting, Mark Fernandez, Lead Cybersecurity Engineer The MITRE Corporation (Slides)
  • Run, Zeek, Run!, Jim Mellander, Cybersecurity Engineer, ESnet (Slides)
  • DNSSEC Protocol Parser - A Case Study, Fatema Bannat Wala, Security Engineer, University of Delaware (Slides)
  • Profiling in Production, Justin Azoff, Corelight (Slides)
  • Identifying Small Heavy-Hitter Flows Using Zeek to Optimize Network Performance, Jordi Ros-Giralt, Managing Engineer, Reservoir Labs (Slides)

10 October 2019 - ZeekWeek Day 2 - Sessions

  • 7 Years with Zeek on Commodity Hardware, Michal Purzynski, Engineer, Mozilla Corporation (Slides)
  • Zeek 3.0.0 and beyond, Robin Sommer, Corelight, CTO and Co-Founder (Slides)
  • Baseline the Network with Zeek, Adam Pumphrey, Consultant, Nimbus LLC (Slides)
  • Without U There is No CommUnity, Amber Graner, Zeek Community Director, Corelight (Slides)
  • Zeek - Incident Response and Beyond, Aashish Sharma, Lawrence Berkeley National Lab
  • Encrypted Things: Network Detection and Response in an Encrypted World, TJ Biehle, Sr. Technical Account Manager, Insight (Slides)
  • Lightning Talks (Various presenters)
    • Zeek Based IPS (Slides)
    • Challenge: Zeek on a large amount of low power sensors, Alex Bortok (Slides)
    • Using BRO [Zeek] to tattle on other tools, Patrick Cain, The Cooper-Cain Group, Inc. (Slides)
    • Contributing to Zeek (How to do a Pull Request), Tim Wojtulewicz, Corelight (Slides)
    • Dynamite-NSM, Open-source project for network traffic analysis with Zeek, Suricata, Flow Data and ELK, Oleg Sinitsin, Dynamite.AI (Slides)
    • eZeeKonfigurator - notice config, Michael Dopheide, ESnet (Slides)
    • How I became a Zeeker & Why I Zeek, Jeff Atkinson (Slides)
  • Using Zeek for SSL Research, Johanna Amann, Senior Researcher, ICSI / Corelight / LBL (Slides)

11 October 2019 - ZeekWeek Day 3 - Sessions

  • New Implementation of Zeek Dictionary to use Less Memory, Jason Lu, Senior Staff Software Engineer, Gigamon (Slides)
  • Introduction to Zeek Script Writing, Seth Hall, Corelight, Chief Evangelist and Co-Founder (No slides were used for this talk; live scripting)
  • Visualizing, Analyzing and Filtering Zeek Events using a Graphical Frontend and OpenGL, Nick Skelsey, Security Engineer, Secure Network (Slides) (Demo Vids)

Thoughts on the event

"ZeekWeek 2019 was another great opportunity to catch up with colleagues across both R&E and industry. It's always inspiring to see what people have been doing with Zeek over the last year." ~ Michael Dopheide, ESnet
“Great experience sharing knowledge and collaborating with the community in this year's ZeekWeek, so much useful content and great place to “zeek out” with fellow Zeekers!” ~ Fatema Bannat Wala, Security Engineer, University of Delaware
“ZeekWeek2019 provided a great opportunity to share knowledge in pursuit of defending networks. Without the people, Zeek is just a tool.” ~Keith Lehigh, Indiana University and Chair, Zeek Leadership Team
“We use Zeek, you should too!” ~ Aashish Sharma, Lawrence Berkeley National Lab and Zeek Leadership Team Member
"It's amazing to see everything this community is doing with Zeek." ~ Robin Sommer, Corelight, CTO and Co-Founder; Zeek Leadership Team Member

Many Thanks and Much Appreciation

Zeek events, such as this year’s ZeekWeek, are only possible through the generous support of the Zeek community, its sponsors, and hosts. A huge shoutout and “THANK YOU” to all our sponsors and speakers!!

Helpful Links and information:

Getting Involved: If you would like to be part of the Open Source Zeek Community and contribute to the success of the project please sign up for our mailing lists, join our IRC Channel, come to our events, follow the blog and/or Twitter feed. If you’re writing scripts or plugins for Zeek we would love to hear from you! Can’t figure out what your next step should be, just reach out. Together we can find a place for you to actively contribute and be a part of this growing community.

About Zeek (formerly Bro): Zeek is the world’s leading platform for network security monitoring. Flexible, open source, and powered by defenders.

Tuesday, October 8, 2019

ZeekWeek Q&A with the Community: Bricata

by Amber Graner, Zeek Director of Community

As ZeekWeek gets underway, we wanted to find out what’s new among members of the Zeek Community. Accordingly, we had a chance to catch up with the Bricata team.

Bricata is a contributor to the Zeek community, and supporter of ZeekWeek as the exclusive sponsor of the Welcome Reception for the 2019 event.

1. For those who are new to the network security monitoring (NSM) space can you tell people about Bricata?

Bricata: Bricata is laser-focused on empowering security analysts to hunt effectively. The platform provides analysts with the tools they need to adequately respond to network threats and provide comprehensive network protection. Bricata gives security teams the capabilities to do things like:

  • Obtain network visibility quickly to thoroughly understand what’s taking place in their environment
  • Respond to alerts and understand their context. Alerts are triggered by our multiple threat detection engines, including Zeek; Suricata; IOC matching, and AI-based binary conviction
  • Hunt for zero-day threats using Zeek-generated metadata and PCAPs and develop countermeasures against future attacks

From a workflow perspective, Bricata is especially well-suited to threat investigation and hunting. That means the platform provides a streamlined approach to foraging through network data and developing insight. It’s the metadata produced by Zeek that provides the context for investigating alerts and taking action with the platform.

Flexibility is an important principle here. Bricata gives security organizations the flexibility to customize and enrich the network metadata so that it’s meaningful within the context of their specific environments and use cases. In addition, our dashboard and visualization tools can be easily tailored to an individual analyst’s preferences.

2. Why is ZeekWeek and the Zeek Project important to Bricata?
Bricata: ZeekWeek is a time for everyone in the community to get together. We’ve found it to be a very devoted group of people sharing their experiences working with Zeek and sharing how they’ve worked out solutions to difficult, but common challenges.

In the past, we’ve used this opportunity to share successes we’ve had with the Zeek Project in the context of our solution and our customers’ use of Zeek. For example, we previously released a labeling module to the community, which provides a way for analysts to share their knowledge about the environment. Those labels are matched with network data that Zeek is generating, which in turn enables more sophisticated threat detection and network analysis.

We expect to see a lot of focus on machine learning this year with Zeek-produced datasets, and particularly on how people are optimizing their use and management of that data. That’s important because network speeds keep getting faster and, left unconstrained, Zeek is known to produce a high volume of data.

3. What can attendees expect to learn if they visit your booth at ZeekWeek?

Bricata: Visitors will see just how easy we’ve made it to deploy and use Zeek in their environment. They can stand it up and get usable network visibility very quickly. This allows them to easily incorporate it into their IT infrastructure and security operations.

Secondly, people who haven’t seen the solution in a while will find some of the most recent enhancements we’ve made for our customers interesting. For example, as members of the community know, Zeek can generate a wealth of metadata. While that’s useful, it can also be overwhelming, so we’ve incorporated fine-grained filters that let security teams precisely control which Zeek logs they keep. This ability prevents the costly processing and storage of unnecessary metadata.
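
Bricata’s filter implementation isn’t public, but the same kind of fine-grained control is available through Zeek’s own logging framework, which lets a policy script attach a per-record predicate to any log stream. A minimal sketch using Log::get_filter and Log::add_filter (the local-subnet condition is a hypothetical example of a site-specific policy, not Bricata’s actual filter):

```zeek
# Keep only conn.log entries whose responder is inside our monitored
# networks. The predicate runs once per record; returning F drops the
# record from this stream before it is written.
event zeek_init()
    {
    local f = Log::get_filter(Conn::LOG, "default");
    f$pred = function(rec: Conn::Info): bool
        {
        return rec$id$resp_h in Site::local_nets;
        };
    Log::add_filter(Conn::LOG, f);
    }
```

Because re-adding a filter under the same name replaces the default, this trims the stream in place rather than creating a second log file.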

Finally, and this is one of the benefits of the community, we’ve adopted the 5-tuple Community ID hash. We’re using it to help consolidate similar alerts under a single grouping as a means to reduce the alert fatigue the SOC can sometimes experience. Bricata is bullish on the Community ID because we see it as an up-and-coming standard that will enable seamless interoperability with other security solutions.
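
The Community ID spec is public, which is what makes this kind of cross-tool alert grouping possible: any tool that sees the same flow 5-tuple computes the same ID. A minimal Python sketch of the v1 computation for IPv4 tuples (the function name and defaults are our own, not part of the spec):

```python
import base64
import hashlib
import socket
import struct

def community_id_v1(saddr, sport, daddr, dport, proto=6, seed=0):
    """Compute a v1 Community ID for an IPv4 flow tuple.

    The endpoints are put in a canonical order first (smaller address,
    then smaller port), so both directions of a flow hash to the same ID.
    """
    src = socket.inet_aton(saddr)
    dst = socket.inet_aton(daddr)
    # Canonical order: lexicographically smaller (address, port) pair first.
    if (src, sport) > (dst, dport):
        src, sport, dst, dport = dst, dport, src, sport
    # seed (2 bytes) + src addr + dst addr + proto + pad + src port + dst port
    data = struct.pack("!H", seed) + src + dst + struct.pack("!BBHH", proto, 0, sport, dport)
    return "1:" + base64.b64encode(hashlib.sha1(data).digest()).decode("ascii")
```

Because of the canonical ordering, the alert for 1.2.3.4:1122 → 5.6.7.8:3344 and the alert for the reply traffic both carry the same ID, which is exactly the property a SOC needs to fold them into one grouping.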

4. What else would you like attendees to know about that I haven't asked you about?
Bricata: Fly-Away kits are one of our initiatives that extend beyond the traditional use cases for NSM. Zeek is an integral part, and here are a couple of examples:

  • We’ve partnered with a solution provider that makes network taps to develop a portable flyaway kit for incident response. This brings visibility to environments that are not properly instrumented, or where the response team is unfamiliar with the environment.
  • We’re continuing to build traction among service providers who provide digital forensics and incident response (DFIR). Their teams are using our platform when deployed to dynamic situations like data breaches, insider threats, or any sort of suspected malicious network activity. It helps incident responders quickly understand what is happening on a network, detect threats and facilitate the incident response process.

* * *
ZeekWeek 2019 attendees interested in learning more about Bricata should look for their display on the exhibition floor. In addition, you can check out their website, and stay in touch on LinkedIn or Twitter.

Friday, October 4, 2019

Zeek, Corelight and Humio help make observability accessible

Guest post by Humio

We’re proud to have Humio on board as the exclusive training sponsor for ZeekWeek 2019. As a thought leader in the observability space, Humio has a deep understanding of making observability accessible, comprehensive, and affordable.

Humio can help you efficiently visualize and get answers from the Zeek log volumes that Corelight sensors generate. By pairing Corelight’s deep network monitoring and logging with Humio’s fast and affordable log management technology, you’ll get accurate answers to critical security and IT questions more quickly and more easily than you thought possible.

Humio shares their thoughts about how the need for comprehensive observability is driving a cultural shift.

Our industry is moving at lightning speed towards distributed service-driven architectures, and engineers are on a quest to improve how they observe their systems as a whole. Adoption of microservices and containerized architectures has elevated the need for developers and operations teams to use observability solutions to correlate events, identify threats, and troubleshoot problems. From a business value point of view, managers want observability solutions that allow them to keep calm when their software infrastructure and services are hit with incidents or failures.

Many organizations adopt a combination of log management, metrics, and tracing solutions for observability across their infrastructure. We have found that just having these tools isn’t enough to ensure that engineering teams are able to reap value from them. A cultural shift is required.

Excerpt from O’Reilly’s Distributed Systems Observability book by Cindy Sridharan:

“As my friend Brian Knox, who manages the Observability team at DigitalOcean, said: ‘The goal of an Observability team is not to collect logs, metrics, or traces. It is to build a culture of engineering based on facts and feedback, and then spread that culture within the broader organization.’

“The same can be said about observability itself, in that it’s not about logs, metrics, or traces, but about being data-driven during debugging and using the feedback to iterate on and improve the product.”

As Brian Knox and Cindy Sridharan mention in the excerpt above, observability is about having an engineering culture that values facts and feedback, “being data driven” during debugging, and using this mindset to iterate, improve, and solve problems.

At Humio, we meet many teams that have yet to access the full value they could get from their log data. This isn’t because they don’t have or want a “data driven” observability engineering culture, but rather that their current log solution prevents them from realizing it.

Commonly, teams encounter three restrictions with their log solutions:

1. Volume: Modern organizations generate large amounts of unstructured log data — a lot of time is spent on limiting or deciding what data to send to the system. 
2. Speed: Slow queries and latency between index and search phases take too long. Ultimately, the data isn’t available fast enough. 
3. Simplicity: Logging solutions that are not easy to use, query, deploy, or manage end up seeing limited use or frustrating their users.

Data-driven Log Management

Our approach at Humio is to remove these restrictions, so data-driven observability teams can gain more value from their log data. We encourage engineers to send all relevant log data, and for all the data to be accessible. Limiting data based on what a logging solution can handle is restrictive, and often it is the logs that were left out that create frustrating debugging scenarios.

Humio is built to scale linearly, and efficiently store data so users aren’t wasting their compute resources. These days, speed matters, and by using real-time streaming capabilities for querying and dashboards, Humio superpowers live system visibility for engineers. Our CTO, Kresten Krab Thorup, wrote a blog post to explain how Humio scales and handles data.

For data-driven logging to succeed, engineering teams should use it for the value it provides. Humio’s query language and ease of use extend adoption beyond the Ops team to developer organizations, making it a shared solution for everyone. For example, Lunar Way’s developer-driven ops team uses Humio across both its development and operations teams.

Observability Site License

Humio’s approach to logging is valuable for both small- and large-volume users. For teams with large logging volumes (multi TB/day), Humio software is available On-Premises at a fixed annual site license price. This enables companies to access large log volumes without volume-based licensing costs or the extra manpower required to run complicated cluster logging environments. With this model, organizations can add instances and scale up as their data volumes grow or burst. For observability or infrastructure teams who want to deploy multi-tenant logging infrastructures across teams within an organization, Humio can provide simple pricing.

At Humio, we believe in the value of data-driven logging, and the benefits companies derive from this in their observability stack. With a unique product and simple pricing, Humio is on a mission to bring this value to engineering teams who’ve been struggling until now.

Thursday, October 3, 2019

ZeekWeek 2019 - Thank you to our sponsors

The Zeek Project Leadership Team (LT) would like to thank all of the ZeekWeek 2019 sponsors for their generous support. Without their ongoing support ZeekWeek would not be possible.

ZeekWeek is the most important community event for users, developers, incident responders, threat hunters and architects who rely on the open-source Zeek network security monitor as a critical element in their security stack.

If you want to meet with the Zeek Leadership team, core maintainers or our sponsors, registration is still open.

We look forward to seeing you all and our sponsors in Seattle on 8-11 October.

This year’s sponsors include:

40 GIG

10 GIG

Hosted by: