## What is 3D Deep?

3D Deep - Compared to existing representations used for 3D deep learning, PIFu produces high-resolution surfaces including largely unseen regions such as the back of a person.^{[1]}

A residual network was used to build the 3D deep-learning system.^{[2]}

Finally, the prediction results of the proposed 2D and 3D deep models were blended together to boost recognition performance.^{[3]}

In this study, we propose a 3D deep neural network called U-ReSNet, a joint framework that can accurately register and segment medical volumes.^{[4]}

In this paper, we describe a 3D deep residual encoder-decoder CNN with Squeeze-and-Excitation blocks for brain tumor segmentation.^{[5]}

In this paper, we propose a method to automatically segment MR prostate images based on a 3D deeply supervised FCN with concatenated atrous convolutions (3D DSA-FCN).^{[6]}

We aim to harvest the sparsity information of C3D deep features and use it as additional information to differentiate normal and anomalous events in a video.^{[7]}

Experimental results on 63 clinically confirmed HCCs with contrast-enhanced MR demonstrate the superior performance of the proposed framework: (1) the 3D deep feature outperforms texture features in the radiomics approach for MVI prediction.^{[8]}

We use the "3D deep learning" method to train a deep neural network.^{[9]}

The lack of comprehensive analysis makes it difficult to justify deployment of 3D deep learning models in real-world, safety-critical applications.^{[10]}

Since the artifacts caused by few-view reconstruction appear in 3D rather than 2D geometry, a 3D deep network has great potential for improving image quality in a data-driven fashion.^{[11]}

3D deep learning performance depends on object representation and local feature extraction.^{[12]}

A data-driven method is then required to process these 3D data, which creates a strong demand for 3D deep learning.^{[13]}

In this work, we propose a 3D deep-learning-based super-resolution (SR) framework to reconstruct isotropic high-resolution MR images from multiple anisotropic scans.^{[14]}

The improvement of conversion results employing a 3D deep learning approach is evaluated.^{[15]}

In this paper, we propose an automatic and efficient algorithm based on a 3D deep fully convolutional network for identifying implanted seeds in CT images.^{[16]}

Specifically, we propose two schemes: (1) a 1D deep capsule network designed for spectral classification, combining principal component analysis, a CNN, and the Conv-Capsule network; and (2) a 3D deep capsule network designed for spectral-spatial classification, combining extended multi-attribute profiles, a CNN, and the Conv-Capsule network.^{[17]}

Recent progress in 3D deep learning has shown that it is possible to design special convolution operators to consume point cloud data.^{[18]}

We find that point clouds provide a richer signal than RGB images for learning obstacle avoidance, motivating the use (and continued study) of 3D deep learning models for embodied navigation.^{[19]}

The domain of 3D deep learning is growing rapidly as 3D sensor costs plunge and the perception capabilities these sensors provide are continuously extended.^{[20]}

We present a 3D deep regression neural network to automatically generate collateral images from dynamic susceptibility contrast-enhanced magnetic resonance perfusion (DSC-MRP) in acute ischemic stroke.^{[21]}

The mitigation of these false alarms and the recent advancement of 3D deep neural networks in video action recognition collectively motivate us to exploit the 3D ResNet in our proposed method, which helps to extract spatio-temporal features from the videos.^{[22]}

A 3D deep neural network was trained and tested on this unique OCT optic nerve head dataset from Stanford.^{[23]}

In this paper, we propose a 3D deeply supervised fully convolutional network (FCN) with a dilated convolution kernel to automatically segment the prostate in CT images.^{[24]}

3D deep-learning methods are promising for tackling urban object detection inside a LiDAR cloud.^{[25]}

We developed a 3D deep convolutional network algorithm for automated segmentation of CT images to build realistic computational phantoms.^{[26]}

A 3D deep neural network is implemented to differentiate tumor from normal tissue; subsequently, a second 3D deep neural network is developed for tumor classification.^{[27]}

However, they neglect the understanding and interpretation of decisions made by these 3D deep learning models.^{[28]}

To further improve the estimation accuracy, we propose applying 3D deep network architectures and leveraging the complete hand surface as intermediate supervision for learning 3D hand pose from depth images.^{[29]}

Many 2D and 3D deep learning models have achieved state-of-the-art segmentation performance on 3D biomedical image datasets.^{[30]}

There are two modules in the proposed scheme: (1) a low-resolution displacement vector field (LR-DVF) estimator, which uses a 3D deep convolutional network (ConvNet) to directly estimate the voxel-wise displacement (a 3D vector field) between PET/CT images, and (2) a 3D spatial transformer and re-sampler, which warps the PET images to match the anatomical structures in the CT images using the estimated 3D vector field.^{[31]}
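Many of the systems above replace 2D convolutions with volumetric ones over a voxel grid. As an illustrative sketch (not taken from any cited paper), a "valid" 3D convolution slides a small kernel through the volume along all three axes:

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive 'valid' 3D convolution (cross-correlation) over a voxel grid."""
    D, H, W = volume.shape
    d, h, w = kernel.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                # Multiply the kernel against the local 3D neighborhood and sum
                out[z, y, x] = np.sum(volume[z:z+d, y:y+h, x:x+w] * kernel)
    return out

volume = np.zeros((8, 8, 8))
volume[3:5, 3:5, 3:5] = 1.0          # a small cube of occupied voxels
kernel = np.ones((3, 3, 3)) / 27.0   # local 3x3x3 averaging filter
response = conv3d_valid(volume, kernel)
print(response.shape)  # (6, 6, 6)
```

Real 3D CNNs stack many such filters (with learned weights, striding, and padding), which is why their memory cost grows cubically with resolution.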

## convolutional neural network

This paper develops a novel 3D deep neural network equipped with attention modules for better prostate segmentation in TRUS by fully exploiting the complementary information encoded in different layers of the convolutional neural network (CNN).^{[1]}

This paper for the first time presents a 3D deep convolutional neural network using an attention mechanism for survival prediction from multiparametric MRI in glioblastoma (GBM) patients.^{[2]}

We propose to use a 3D Deep Convolutional Neural Network (3D-DCNN) architecture to learn the embedding model using a distance-based triplet-loss similarity metric.^{[3]}

To predict lung nodule malignancy with high sensitivity and specificity, we propose a fusion algorithm that combines handcrafted features (HF) with the features learned at the output layer of a 3D deep convolutional neural network (CNN).^{[4]}

Here, we deploy a novel 3D deep convolutional neural network (DCNN) that directly assimilates the hyperspectral data.^{[5]}

In recent years, 3D deep convolutional neural networks (3D CNNs) have been successfully used for segmentation of knee bones and cartilage.^{[6]}

We designed a nine-layer 3D deep convolutional neural network (CNN) that takes as input a gridded box with the atomic coordinates and types around a residue.^{[7]}

In this paper, a computer-aided lung nodule detection system using 3D deep convolutional neural networks (CNNs) is developed.^{[8]}

In this paper, we present a 3D Deep Convolutional Neural Network to predict the hidden parts of objects from a single view and to recover their complete shape.^{[9]}

Therefore, this paper develops a 3D deep convolutional neural network on 18F-FDG PET images for automated early diagnosis.^{[10]}

Here we present pyLattice_deepLearning, an image segmentation module based on 3D deep convolutional neural networks (3D U-Nets [3]).^{[11]}
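The distance-based triplet loss mentioned for the 3D-DCNN embedding model can be sketched in a few lines; the margin value and toy embeddings below are illustrative assumptions, not values from the cited work:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss: pull the positive toward the anchor, push the negative
    away, until the squared-distance gap exceeds the margin."""
    d_pos = np.sum((anchor - positive) ** 2)   # anchor-positive distance
    d_neg = np.sum((anchor - negative) ** 2)   # anchor-negative distance
    return max(0.0, d_pos - d_neg + margin)

a = np.array([1.0, 0.0])
p = np.array([0.9, 0.1])   # same identity: close to the anchor
n = np.array([-1.0, 0.0])  # different identity: far from the anchor
print(triplet_loss(a, p, n))  # 0.0 — this triplet already satisfies the margin
```

During training the loss is averaged over many (anchor, positive, negative) triplets, so the network learns an embedding in which distance encodes similarity.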

## Novel 3d Deep

This paper develops a novel 3D deep neural network equipped with attention modules for better prostate segmentation in TRUS by fully exploiting the complementary information encoded in different layers of the convolutional neural network (CNN).^{[1]}

Here, we deploy a novel 3D deep convolutional neural network (DCNN) that directly assimilates the hyperspectral data.^{[2]}

In this paper, we propose a novel 3D deep learning model for object localization and object bounding box estimation.^{[3]}

In this context, we present a novel 3D deep model for dynamic spatiotemporal representation of faces in videos.^{[4]}

## 3d deep learning

We present a new 3D deep learning hand pose estimation network for an unordered point cloud.^{[1]}

Compared to existing representations used for 3D deep learning, PIFu produces high-resolution surfaces including largely unseen regions such as the back of a person.^{[2]}

We benchmark four state-of-the-art 3D deep learning algorithms for fine-grained semantic segmentation and three baseline methods for hierarchical semantic segmentation.^{[3]}

We use the "3D deep learning" method to train a deep neural network.^{[4]}

The lack of comprehensive analysis makes it difficult to justify deployment of 3D deep learning models in real-world, safety-critical applications.^{[5]}

3D deep learning performance depends on object representation and local feature extraction.^{[6]}

A data-driven method is then required to process these 3D data, which creates a strong demand for 3D deep learning.^{[7]}

In this work, we propose a 3D deep-learning-based super-resolution (SR) framework to reconstruct isotropic high-resolution MR images from multiple anisotropic scans.^{[8]}

The improvement of conversion results employing a 3D deep learning approach is evaluated.^{[9]}

Recent progress in 3D deep learning has shown that it is possible to design special convolution operators to consume point cloud data.^{[10]}

We find that point clouds provide a richer signal than RGB images for learning obstacle avoidance, motivating the use (and continued study) of 3D deep learning models for embodied navigation.^{[11]}

The domain of 3D deep learning is growing rapidly as 3D sensor costs plunge and the perception capabilities these sensors provide are continuously extended.^{[12]}

In this paper, we propose a novel 3D deep learning model for object localization and object bounding box estimation.^{[13]}

Here we present a performance evaluation of publicly available implementations of established 3D deep learning architectures for semantic segmentation (namely DeepMedic, 3D U-Net, FCN), with a particular focus on identifying a skull-stripping approach that performs well on brain tumor scans and also has a low computational footprint.^{[14]}

Here, we propose a 3D deep learning network based on a dual generative adversarial network (dual-GAN) framework for recovering HR volume images from LR volume images.^{[15]}

As 3D deep learning methods emerged, view-based approaches gained considerable success in object classification.^{[16]}

A customized 3D deep learning architecture based on dynamic patch-based sampling demonstrates high performance in detection of complete ACL tears, with over 96% test set accuracy.^{[17]}

However, they neglect the understanding and interpretation of decisions made by these 3D deep learning models.^{[18]}

Many 2D and 3D deep learning models have achieved state-of-the-art segmentation performance on 3D biomedical image datasets.^{[19]}

The second proposed method uses an object detection architecture, which combines 2D object detectors with contemporary 3D deep learning techniques.^{[20]}
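Several entries above concern 3D deep learning on unordered point clouds. A minimal sketch of why such networks can consume point sets directly is a shared per-point map followed by a symmetric (order-independent) max-pooling aggregation, in the spirit of PointNet-style operators; all shapes and values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def point_cloud_feature(points, weights):
    """Shared per-point linear map followed by max pooling.

    Because every point passes through the same weights and max() ignores
    ordering, the global feature is invariant to point permutation.
    """
    per_point = points @ weights   # (N, 3) -> (N, F), same map for each point
    return per_point.max(axis=0)   # symmetric aggregation over the point set

points = rng.normal(size=(128, 3))   # toy point cloud
weights = rng.normal(size=(3, 16))   # one shared "layer" of learned weights
feat = point_cloud_feature(points, weights)

shuffled = rng.permutation(points, axis=0)
print(np.allclose(feat, point_cloud_feature(shuffled, weights)))  # True
```

Real point-cloud networks interleave nonlinearities and local neighborhood grouping, but the permutation-invariance argument is exactly this one.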

## 3d deep convolutional

This paper for the first time presents a 3D deep convolutional neural network using an attention mechanism for survival prediction from multiparametric MRI in glioblastoma (GBM) patients.^{[1]}

We propose to use a 3D Deep Convolutional Neural Network (3D-DCNN) architecture to learn the embedding model using a distance-based triplet-loss similarity metric.^{[2]}

To predict lung nodule malignancy with high sensitivity and specificity, we propose a fusion algorithm that combines handcrafted features (HF) with the features learned at the output layer of a 3D deep convolutional neural network (CNN).^{[3]}

Here, we deploy a novel 3D deep convolutional neural network (DCNN) that directly assimilates the hyperspectral data.^{[4]}

In recent years, 3D deep convolutional neural networks (3D CNNs) have been successfully used for segmentation of knee bones and cartilage.^{[5]}

Here, we propose a new end-to-end 3D deep convolutional neural net (DCNN), called NoduleNet, to solve nodule detection, false positive reduction and nodule segmentation jointly in a multi-task fashion.^{[6]}

We designed a nine-layer 3D deep convolutional neural network (CNN) that takes as input a gridded box with the atomic coordinates and types around a residue.^{[7]}

In this paper, a computer-aided lung nodule detection system using 3D deep convolutional neural networks (CNNs) is developed.^{[8]}

In this paper, we present a 3D Deep Convolutional Neural Network to predict the hidden parts of objects from a single view and to recover their complete shape.^{[9]}

We developed a 3D deep convolutional network algorithm for automated segmentation of CT images to build realistic computational phantoms.^{[10]}

Therefore, this paper develops a 3D deep convolutional neural network on 18F-FDG PET images for automated early diagnosis.^{[11]}

There are two modules in the proposed scheme: (1) a low-resolution displacement vector field (LR-DVF) estimator, which uses a 3D deep convolutional network (ConvNet) to directly estimate the voxel-wise displacement (a 3D vector field) between PET/CT images, and (2) a 3D spatial transformer and re-sampler, which warps the PET images to match the anatomical structures in the CT images using the estimated 3D vector field.^{[12]}

Here we present pyLattice_deepLearning, an image segmentation module based on 3D deep convolutional neural networks (3D U-Nets [3]).^{[13]}
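The LR-DVF scheme above warps one volume onto another using an estimated per-voxel displacement field. The cited re-sampler is not specified here, so the following is an illustrative nearest-neighbor sketch of that warping step:

```python
import numpy as np

def warp_volume(volume, dvf):
    """Warp a volume with a per-voxel displacement field (nearest-neighbor).

    dvf[..., k] is the displacement along axis k at each output voxel;
    out[p] = volume[p + dvf[p]], with source indices clipped to the bounds.
    """
    D, H, W = volume.shape
    zz, yy, xx = np.meshgrid(np.arange(D), np.arange(H), np.arange(W),
                             indexing="ij")
    src_z = np.clip(np.rint(zz + dvf[..., 0]).astype(int), 0, D - 1)
    src_y = np.clip(np.rint(yy + dvf[..., 1]).astype(int), 0, H - 1)
    src_x = np.clip(np.rint(xx + dvf[..., 2]).astype(int), 0, W - 1)
    return volume[src_z, src_y, src_x]

vol = np.zeros((4, 4, 4))
vol[2, 2, 2] = 1.0                 # one bright voxel
dvf = np.ones((4, 4, 4, 3))        # uniform displacement of +1 voxel per axis
warped = warp_volume(vol, dvf)
print(warped[1, 1, 1])  # 1.0 — the bright voxel now appears at (1, 1, 1)
```

Spatial-transformer modules use trilinear rather than nearest-neighbor sampling so that the warp stays differentiable, but the indexing logic is the same.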

## 3d deep neural

This paper develops a novel 3D deep neural network equipped with attention modules for better prostate segmentation in TRUS by fully exploiting the complementary information encoded in different layers of the convolutional neural network (CNN).^{[1]}

In this study, we propose a 3D deep neural network called U-ReSNet, a joint framework that can accurately register and segment medical volumes.^{[2]}

We approach the segmentation task for two-photon brain angiograms using a fully convolutional 3D deep neural network.^{[3]}

The mitigation of these false alarms and the recent advancement of 3D deep neural networks in video action recognition collectively motivate us to exploit the 3D ResNet in our proposed method, which helps to extract spatio-temporal features from the videos.^{[4]}

A 3D deep neural network was trained and tested on this unique OCT optic nerve head dataset from Stanford.^{[5]}

A 3D deep neural network is implemented to differentiate tumor from normal tissue; subsequently, a second 3D deep neural network is developed for tumor classification.^{[6]}

"Making 3D deep neural networks debuggable".^{[7]}

## 3d deep model

Finally, the prediction results of the proposed 2D and 3D deep models were blended together to boost recognition performance.^{[1]}

In this context, we present a novel 3D deep model for dynamic spatiotemporal representation of faces in videos.^{[2]}

## 3d deep feature

We aim to harvest the sparsity information of C3D deep features and use it as additional information to differentiate normal and anomalous events in a video.^{[1]}

Experimental results on 63 clinically confirmed HCCs with contrast-enhanced MR demonstrate the superior performance of the proposed framework: (1) the 3D deep feature outperforms texture features in the radiomics approach for MVI prediction.^{[2]}

## 3d deep network

Since the artifacts caused by few-view reconstruction appear in 3D rather than 2D geometry, a 3D deep network has great potential for improving image quality in a data-driven fashion.^{[1]}

To further improve the estimation accuracy, we propose applying 3D deep network architectures and leveraging the complete hand surface as intermediate supervision for learning 3D hand pose from depth images.^{[2]}