Abstract: Deploying deep convolutional neural networks (CNNs) on resource-constrained devices presents significant challenges due to their high computational demands and rigid, static architectures. To overcome these limitations, this thesis explores methods for enabling CNNs to dynamically adjust their computational complexity based on available hardware resources. We introduce adaptive CNN architectures capable of scaling their capacity at runtime, efficiently balancing performance against resource utilization. To achieve this adaptability, we propose a structured pruning and dynamic reconstruction approach that creates nested subnetworks within a single CNN model. This approach allows the network to switch between compact and full-sized configurations at runtime without retraining, making it suitable for deployment across varying hardware platforms. Experiments conducted on multiple CNN architectures, including VGG-16, AlexNet, ResNet-20, and ResNet-56, with the CIFAR-10 and Imagenette datasets demonstrate that adaptive models maintain or even enhance performance under varying computational constraints. Our results show that embedding adaptability directly into CNN architectures significantly improves their robustness and flexibility, paving the way for efficient real-world deployment in diverse computational environments.
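For concreteness, below is a minimal sketch of the nested-subnetwork idea, assuming a PyTorch-style implementation in which each compact configuration reuses a leading slice of the full model's weight tensor. The names here (SlicedConv2d, width_mult) are illustrative, not the thesis's actual API.

```python
# Minimal sketch of runtime-switchable nested subnetworks via channel
# slicing. Assumes PyTorch; SlicedConv2d and width_mult are hypothetical
# names for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlicedConv2d(nn.Conv2d):
    """Conv layer whose compact configurations reuse a leading slice of
    the full weight tensor, so subnetworks are nested in one model."""

    def forward(self, x, width_mult=1.0):
        out_ch = max(1, int(self.out_channels * width_mult))
        in_ch = x.shape[1]  # match whatever slice the previous layer produced
        weight = self.weight[:out_ch, :in_ch]
        bias = self.bias[:out_ch] if self.bias is not None else None
        return F.conv2d(x, weight, bias, self.stride,
                        self.padding, self.dilation, self.groups)

conv = SlicedConv2d(16, 32, kernel_size=3, padding=1)
x = torch.randn(1, 16, 8, 8)
full = conv(x, width_mult=1.0)     # full-sized configuration: 32 channels
compact = conv(x, width_mult=0.5)  # compact configuration: 16 channels
print(full.shape, compact.shape)   # (1, 32, 8, 8) and (1, 16, 8, 8)
```

Because every compact configuration is a prefix slice of the full weights, a single parameter set serves all configurations, which is what permits switching at runtime without retraining.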
Abstract: Deep Neural Network (DNN) Inference in Edge Computing, often called Edge Intelligence, requires solutions to ensure that neither sensitive data nor model intellectual property is revealed in the process. Privacy-preserving Edge Intelligence is only emerging, despite the growing prevalence of Edge Computing as a context for Machine-Learning-as-a-Service. Existing solutions have yet to be applied, and possibly adapted, to state-of-the-art DNNs. This position paper provides an original assessment of the compatibility of existing techniques for privacy-preserving DNN Inference with the characteristics of an Edge Computing setup, highlighting the appropriateness of secret sharing in this context. We then address the future role of model compression methods in the research towards secret sharing on DNNs with state-of-the-art performance.
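To illustrate why secret sharing composes naturally with DNN inference, the sketch below applies two-party additive secret sharing to a single linear layer: because the layer is linear, each party can compute on its share alone, and the true output is recovered only when the shares are combined. This is a toy over floating-point values under assumptions of our own; practical protocols operate over integer rings and also handle non-linear layers, and none of the names here are taken from the paper.

```python
# Toy two-party additive secret sharing on a linear layer. Real protocols
# use integer rings and cover non-linearities; this only shows the
# linearity property that makes secret sharing attractive for DNNs.
import numpy as np

rng = np.random.default_rng(0)

def share(x):
    """Split x into two additive shares: x = s0 + s1."""
    s0 = rng.standard_normal(x.shape)
    return s0, x - s0

W = rng.standard_normal((4, 8))   # model weights (held by the server)
x = rng.standard_normal(8)        # client's private input

x0, x1 = share(x)                 # client sends one share to each party
y0 = W @ x0                       # party 0 computes on its share only
y1 = W @ x1                       # party 1 computes on its share only

assert np.allclose(y0 + y1, W @ x)  # combining shares recovers the output
```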