Deep neural networks (DNNs) are a machine learning technique commonly used to achieve state-of-the-art accuracy for many applications, including image classification, speech recognition, and natural language translation. State-of-the-art DNNs rely on multi-layer ("deep") architectures, which demand ever-increasing amounts of computation. To meet this growing computational demand, specialized hardware accelerators are used to improve performance and energy efficiency. To manage design costs, System-on-Chip (SoC) designers integrate hardware accelerators from external third parties; this comes at the expense of trust, since outsourced accelerators may contain malicious modifications, also known as hardware trojans. Hardware trojans can cause an accelerator to misbehave in the field, either by altering the result of a computation and returning an incorrect classification or by leaking sensitive information. Given the wide range of DNN applications, it is imperative to secure computations for safety-critical and/or sensitive applications, such as autonomous driving and healthcare. Deep learning applications have also been shown to be vulnerable to a range of fault attacks, which trojans can trigger.

This dissertation takes on the challenge of securing DNN inference and introduces approaches to protect against hardware trojans. It considers techniques that guarantee unconditional security and augments them with hardware design principles and insights to propose solutions with low area and latency overheads. Four approaches are presented in total. The first three frameworks provide provably correct computations by leveraging interactive proof protocols, and the last framework addresses privacy with an approach built on secure multiparty computation for DNN inference: (1) VeritAcc provides integrity for matrix multiplications in embedded systems; (2) SafeTPU extends integrity to full DNNs, including an enhanced interactive proof protocol for convolution; (3) TrapezeHW proposes a framework for convolutional neural network (CNN)-specific protocols running in an untrusted cloud environment; and (4) PACMANN provides privacy for DNNs on untrusted hardware.
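The idea that a verifier can certify an accelerator's matrix product far more cheaply than recomputing it may be easier to see with a concrete example. The sketch below uses Freivalds' randomized check, a classical verification technique shown here purely for illustration; the actual protocols in VeritAcc and the other frameworks are not specified in this abstract, and the function and variable names are hypothetical.

```python
import numpy as np

def freivalds_check(A, B, C, rounds=20, rng=None):
    """Probabilistically verify that C == A @ B without recomputing the product.

    Each round picks a random 0/1 vector r and checks A @ (B @ r) == C @ r,
    which costs O(n^2) per round instead of the O(n^3) of a full matrix
    multiplication. An incorrect C passes a single round with probability at
    most 1/2, so the chance of accepting a wrong result is at most 2**-rounds.
    """
    rng = rng or np.random.default_rng()
    n = B.shape[1]
    for _ in range(rounds):
        r = rng.integers(0, 2, size=n)
        if not np.array_equal(A @ (B @ r), C @ r):
            return False  # the returned product is definitely wrong
    return True


# Example: a trusted host checks a result returned by an untrusted accelerator.
A = np.random.randint(-8, 8, size=(64, 64))
B = np.random.randint(-8, 8, size=(64, 64))
C_honest = A @ B
C_tampered = C_honest.copy()
C_tampered[3, 5] += 1  # a trojan flips a single output element

print(freivalds_check(A, B, C_honest))    # True
print(freivalds_check(A, B, C_tampered))  # False (except with probability 2**-20)
```

The check never rejects a correct result, and the verifier's cost stays quadratic in the matrix dimension, which is the kind of asymmetry that makes proof-based integrity attractive for hardware with limited trusted resources.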