A curated list of neural network pruning and related resources. Inspired by awesome-deep-vision, awesome-adversarial-machine-learning, awesome-deep-learning-papers, and Awesome-NAS.

Table of Contents

- Type of Pruning
- 2021
- 2020
- 2019
- 2018
- 2017
- 2016
- 2015
- Related Repo

Type of Pruning

| Type | F | W | Other |
|:---|:---|:---|:---|
| Explanation | Filter pruning | Weight pruning | other types |
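
To make the F/W distinction above concrete, here is a minimal PyTorch sketch (an illustration under simple assumptions, not code from any paper listed below; the helper names `weight_prune` and `filter_prune` are invented for this example) that applies magnitude-based weight pruning and L1-norm filter pruning to one Conv2d layer by masking its weights.

```python
# Minimal sketch contrasting W-type (weight) and F-type (filter) pruning
# on a single convolution layer. Illustration only; helper names are
# made up for this example and do not come from the papers below.
import torch
import torch.nn as nn


def weight_prune(conv: nn.Conv2d, sparsity: float) -> torch.Tensor:
    """W: zero out the individual weights with the smallest magnitude."""
    w = conv.weight.data
    k = max(1, int(w.numel() * sparsity))
    threshold = w.abs().flatten().kthvalue(k).values  # k-th smallest |w|
    mask = (w.abs() > threshold).float()
    conv.weight.data *= mask                          # irregular (unstructured) sparsity
    return mask


def filter_prune(conv: nn.Conv2d, sparsity: float) -> torch.Tensor:
    """F: remove whole output filters, ranked by their L1 norm."""
    w = conv.weight.data                              # shape (out_c, in_c, kH, kW)
    scores = w.abs().sum(dim=(1, 2, 3))               # one importance score per filter
    n_prune = int(w.size(0) * sparsity)
    drop = scores.argsort()[:n_prune]                 # indices of the lowest-norm filters
    mask = torch.ones(w.size(0), device=w.device)
    mask[drop] = 0.0
    conv.weight.data *= mask.view(-1, 1, 1, 1)        # structured sparsity: whole filters go
    return mask


if __name__ == "__main__":
    conv = nn.Conv2d(16, 32, kernel_size=3)
    weight_prune(conv, sparsity=0.5)   # scattered zeros across the kernel tensor
    filter_prune(conv, sparsity=0.5)   # 16 of the 32 output filters zeroed entirely
```

Weight pruning leaves an irregular sparse tensor that needs sparse kernels to give real speedups, while filter pruning yields a smaller dense network that accelerates directly, which is why most entries below fall into one of these two types.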

2021

| Title | Venue | Type | Code |
|:---|:---|:---|:---|
| A Probabilistic Approach to Neural Network Pruning | ICML | F | - |
| Accelerate CNNs from Three Dimensions: A Comprehensive Pruning Framework | ICML | F | - |
| Group Fisher Pruning for Practical Network Compression | ICML | F | PyTorch(Author) |
| On the Predictability of Pruning Across Scales | ICML | W | - |
| Towards Compact CNNs via Collaborative Compression | CVPR | F | PyTorch(Author) |
| Content-Aware GAN Compression | CVPR | F | PyTorch(Author) |
| Permute, Quantize, and Fine-tune: Efficient Compression of Neural Networks | CVPR | F | PyTorch(Author) |
| NPAS: A Compiler-aware Framework of Unified Network Pruning and Architecture Search for Beyond Real-Time Mobile Acceleration | CVPR | F | - |
| Network Pruning via Performance Maximization | CVPR | F | - |
| Convolutional Neural Network Pruning with Structural Redundancy Reduction | CVPR | F | - |
| Manifold Regularized Dynamic Network Pruning | CVPR | F | - |
| Joint-DetNAS: Upgrade Your Detector with NAS, Pruning and Dynamic Distillation | CVPR | FO | - |
| A Gradient Flow Framework For Analyzing Network Pruning | ICLR | F | PyTorch(Author) |
| Neural Pruning via Growing Regularization | ICLR | F | PyTorch(Author) |
| ChipNet: Budget-Aware Pruning with Heaviside Continuous Approximations | ICLR | F | PyTorch(Author) |
| Network Pruning That Matters: A Case Study on Retraining Variants | ICLR | F | PyTorch(Author) |
| Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network | ICLR | W | PyTorch(Author) |
| Layer-adaptive Sparsity for the Magnitude-based Pruning | ICLR | W | PyTorch(Author) |
| Pruning Neural Networks at Initialization: Why Are We Missing the Mark? | ICLR | W | - |
| Robust Pruning at Initialization | ICLR | W | - |

2020

| Title | Venue | Type | Code |
|:---|:---|:---|:---|
| HYDRA: Pruning Adversarially Robust Neural Networks | NeurIPS | W | PyTorch(Author) |
| Logarithmic Pruning is All You Need | NeurIPS | W | - |
| Directional Pruning of Deep Neural Networks | NeurIPS | W | - |
| Movement Pruning: Adaptive Sparsity by Fine-Tuning | NeurIPS | W | PyTorch(Author) |
| Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot | NeurIPS | W | PyTorch(Author) |
| Neuron Merging: Compensating for Pruned Neurons | NeurIPS | F | PyTorch(Author) |
| Neuron-level Structured Pruning using Polarization Regularizer | NeurIPS | F | PyTorch(Author) |
| SCOP: Scientific Control for Reliable Neural Network Pruning | NeurIPS | F | PyTorch(Author) |
| Storage Efficient and Dynamic Flexible Runtime Channel Pruning via Deep Reinforcement Learning | NeurIPS | F | - |
| The Generalization-Stability Tradeoff In Neural Network Pruning | NeurIPS | F | PyTorch(Author) |
| Pruning Filter in Filter | NeurIPS | Other | PyTorch(Author) |
| Position-based Scaled Gradient for Model Quantization and Pruning | NeurIPS | Other | PyTorch(Author) |
| Bayesian Bits: Unifying Quantization and Pruning | NeurIPS | Other | - |
| Pruning neural networks without any data by iteratively conserving synaptic flow | NeurIPS | Other | PyTorch(Author) |
| EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning | ECCV (Oral) | F | PyTorch(Author) |
| DSA: More Efficient Budgeted Pruning via Differentiable Sparsity Allocation | ECCV | F | - |
| DHP: Differentiable Meta Pruning via HyperNetworks | ECCV | F | PyTorch(Author) |
| Meta-Learning with Network Pruning | ECCV | W | - |
| Accelerating CNN Training by Pruning Activation Gradients | ECCV | W | - |
| DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search | ECCV | Other | - |
| Differentiable Joint Pruning and Quantization for Hardware Efficiency | ECCV | Other | - |
| Channel Pruning via Automatic Structure Search | IJCAI | F | PyTorch(Author) |
| Adversarial Neural Pruning with Latent Vulnerability Suppression | ICML | W | - |
| Proving the Lottery Ticket Hypothesis: Pruning is All You Need | ICML | W | - |
| Soft Threshold Weight Reparameterization for Learnable Sparsity | ICML | WF | PyTorch(Author) |
| Network Pruning by Greedy Subnetwork Selection | ICML | F | - |
| Operation-Aware Soft Channel Pruning using Differentiable Masks | ICML | F | - |
| DropNet: Reducing Neural Network Complexity via Iterative Pruning | ICML | F | - |
| Towards Efficient Model Compression via Learned Global Ranking | CVPR (Oral) | F | PyTorch(Author) |
| HRank: Filter Pruning using High-Rank Feature Map | CVPR (Oral) | F | PyTorch(Author) |
| Neural Network Pruning with Residual-Connections and Limited-Data | CVPR (Oral) | F | - |
| Multi-Dimensional Pruning: A Unified Framework for Model Compression | CVPR (Oral) | WF | - |
| DMCP: Differentiable Markov Channel Pruning for Neural Networks | CVPR (Oral) | F | TensorFlow(Author) |
| Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression | CVPR | F | PyTorch(Author) |
| Few Sample Knowledge Distillation for Efficient Network Compression | CVPR | F | - |
| Discrete Model Compression With Resource Constraint for Deep Neural Networks | CVPR | F | - |
| Structured Compression by Weight Encryption for Unstructured Pruning and Quantization | CVPR | W | - |
| Learning Filter Pruning Criteria for Deep Convolutional Neural Networks Acceleration | CVPR | F | - |
| APQ: Joint Search for Network Architecture, Pruning and Quantization Policy | CVPR | F | - |
| Comparing Rewinding and Fine-tuning in Neural Network Pruning | ICLR (Oral) | WF | TensorFlow(Author) |
| A Signal Propagation Perspective for Pruning Neural Networks at Initialization | ICLR (Spotlight) | W | - |
| ProxSGD: Training Structured Neural Networks under Regularization and Constraints | ICLR | W | TF+PT(Author) |
| One-Shot Pruning of Recurrent Neural Networks by Jacobian Spectrum Evaluation | ICLR | W | - |
| Lookahead: A Far-sighted Alternative of Magnitude-based Pruning | ICLR | W | PyTorch(Author) |
| Dynamic Model Pruning with Feedback | ICLR | WF | - |
| Provable Filter Pruning for Efficient Neural Networks | ICLR | F | - |
| Data-Independent Neural Pruning via Coresets | ICLR | W | - |
| AutoCompress: An Automatic DNN Structured Pruning Framework for Ultra-High Compression Rates | AAAI | F | - |
| DARB: A Density-Aware Regular-Block Pruning for Deep Neural Networks | AAAI | Other | - |
| Pruning from Scratch | AAAI | Other | - |
| Reborn filters: Pruning convolutional neural networks with limited data | AAAI | F | - |

2019

| Title | Venue | Type | Code |
|:---|:---|:---|:---|
| Network Pruning via Transformable Architecture Search | NeurIPS | F | PyTorch(Author) |
| Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks | NeurIPS | F | PyTorch(Author) |
| Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask | NeurIPS | W | TensorFlow(Author) |
| One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers | NeurIPS | W | - |
| Global Sparse Momentum SGD for Pruning Very Deep Neural Networks | NeurIPS | W | PyTorch(Author) |
| AutoPrune: Automatic Network Pruning by Regularizing Auxiliary Parameters | NeurIPS | W | - |
| Model Compression with Adversarial Robustness: A Unified Optimization Framework | NeurIPS | Other | PyTorch(Author) |
| MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning | ICCV | F | PyTorch(Author) |
| Accelerate CNN via Recursive Bayesian Pruning | ICCV | F | - |
| Adversarial Robustness vs Model Compression, or Both? | ICCV | W | PyTorch(Author) |
| Learning Filter Basis for Convolutional Neural Network Compression | ICCV | Other | - |
| Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration | CVPR (Oral) | F | PyTorch(Author) |
| Towards Optimal Structured CNN Pruning via Generative Adversarial Learning | CVPR | F | PyTorch(Author) |
| Centripetal SGD for Pruning Very Deep Convolutional Networks with Complicated Structure | CVPR | F | PyTorch(Author) |
| On Implicit Filter Level Sparsity in Convolutional Neural Networks (Extension1, Extension2) | CVPR | F | PyTorch(Author) |
| Structured Pruning of Neural Networks with Budget-Aware Regularization | CVPR | F | - |
| Importance Estimation for Neural Network Pruning | CVPR | F | PyTorch(Author) |
| OICSR: Out-In-Channel Sparsity Regularization for Compact Deep Neural Networks | CVPR | F | - |
| Partial Order Pruning: for Best Speed/Accuracy Trade-off in Neural Architecture Search | CVPR | Other | TensorFlow(Author) |
| Variational Convolutional Neural Network Pruning | CVPR | - | - |
| The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks | ICLR (Best) | W | TensorFlow(Author) |
| Rethinking the Value of Network Pruning | ICLR | F | PyTorch(Author) |
| Dynamic Channel Pruning: Feature Boosting and Suppression | ICLR | F | TensorFlow(Author) |
| SNIP: Single-shot Network Pruning based on Connection Sensitivity | ICLR | W | TensorFlow(Author) |
| Dynamic Sparse Graph for Efficient Deep Learning | ICLR | F | CUDA(3rd) |
| Collaborative Channel Pruning for Deep Networks | ICML | F | - |
| Approximated Oracle Filter Pruning for Destructive CNN Width Optimization (github) | ICML | F | - |
| EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis | ICML | W | PyTorch(Author) |
| COP: Customized Deep Model Compression via Regularized Correlation-Based Filter-Level Pruning | IJCAI | F | TensorFlow(Author) |

2018

| Title | Venue | Type | Code |
|:---|:---|:---|:---|
| Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers | ICLR | F | TensorFlow(Author), PyTorch(3rd) |
| To prune, or not to prune: exploring the efficacy of pruning for model compression | ICLR | W | - |
| Discrimination-aware Channel Pruning for Deep Neural Networks | NeurIPS | F | TensorFlow(Author) |
| Frequency-Domain Dynamic Pruning for Convolutional Neural Networks | NeurIPS | W | - |
| Learning Sparse Neural Networks via Sensitivity-Driven Regularization | NeurIPS | WF | - |
| Amc: Automl for model compression and acceleration on mobile devices | ECCV | F | TensorFlow(3rd) |
| Data-Driven Sparse Structure Selection for Deep Neural Networks | ECCV | F | MXNet(Author) |
| Coreset-Based Neural Network Compression | ECCV | F | PyTorch(Author) |
| Constraint-Aware Deep Neural Network Compression | ECCV | W | SkimCaffe(Author) |
| A Systematic DNN Weight Pruning Framework using Alternating Direction Method of Multipliers | ECCV | W | Caffe(Author) |
| PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning | CVPR | F | PyTorch(Author) |
| NISP: Pruning Networks using Neuron Importance Score Propagation | CVPR | F | - |
| CLIP-Q: Deep Network Compression Learning by In-Parallel Pruning-Quantization | CVPR | W | - |
| “Learning-Compression” Algorithms for Neural Net Pruning | CVPR | W | - |
| Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks | IJCAI | F | PyTorch(Author) |
| Accelerating Convolutional Networks via Global & Dynamic Filter Pruning | IJCAI | F | - |

2017

| Title | Venue | Type | Code |
|:---|:---|:---|:---|
| Pruning Filters for Efficient ConvNets | ICLR | F | PyTorch(3rd) |
| Pruning Convolutional Neural Networks for Resource Efficient Inference | ICLR | F | TensorFlow(3rd) |
| Net-Trim: Convex Pruning of Deep Neural Networks with Performance Guarantee | NeurIPS | W | TensorFlow(Author) |
| Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon | NeurIPS | W | PyTorch(Author) |
| Runtime Neural Pruning | NeurIPS | F | - |
| Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning | CVPR | F | - |
| ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression | ICCV | F | Caffe(Author), PyTorch(3rd) |
| Channel pruning for accelerating very deep neural networks | ICCV | F | Caffe(Author) |
| Learning Efficient Convolutional Networks Through Network Slimming | ICCV | F | PyTorch(Author) |

2016

| Title | Venue | Type | Code |
|:---|:---|:---|:---|
| Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding | ICLR (Best) | W | Caffe(Author) |
| Dynamic Network Surgery for Efficient DNNs | NeurIPS | W | Caffe(Author) |

2015

| Title | Venue | Type | Code |
|:---|:---|:---|:---|
| Learning both Weights and Connections for Efficient Neural Networks | NeurIPS | W | PyTorch(3rd) |

Related Repo

Awesome-model-compression-and-acceleration

EfficientDNNs

Embedded-Neural-Network

awesome-AutoML-and-Lightweight-Models

Model-Compression-Papers

knowledge-distillation-papers

Network-Speed-and-Compression

 

Original: https://github.com/he-y/Awesome-Pruning
