Matches in SemOpenAlex for { <https://semopenalex.org/work/W2995776042> ?p ?o ?g. }
Showing items 1 to 75 of 75, with 100 items per page.
- W2995776042 startingPage "190006" @default.
- W2995776042 abstract "Traditional multi-view geometry methods for recovering the three-dimensional structure of a scene suffer from two problems. First, blurred images and low-texture regions cause feature-point mismatches, which reduce reconstruction accuracy. Second, because a monocular camera provides no scale information, the reconstruction is determined only up to an unknown scale factor, so the true scene structure cannot be obtained. This paper proposes an equal-scale structure-from-motion method based on deep learning. First, a convolutional neural network is used to obtain the depth information of the image. Then, to recover the scale of the monocular camera, an inertial measurement unit (IMU) is introduced: the acceleration and angular velocity measured by the IMU and the camera pose estimated by ORB-SLAM2 are aligned in both the time domain and the frequency domain, and the scale of the monocular camera is estimated in the frequency domain. Finally, the image depth information and the camera pose with the recovered scale factor are fused to reconstruct the three-dimensional structure of the scene. Experiments show that the monocular depth maps obtained by the Depth CNN network overcome the low resolution and loss of important feature information caused by multi-level convolution and pooling operations, reaching an absolute error of 0.192 and an accuracy of 0.959. The multi-sensor fusion method achieves a scale error of 0.24 m in the frequency domain, which is more accurate than the VIORB method. The error between the reconstructed 3D model and the real dimensions is about 0.2 m, which verifies the effectiveness of the proposed method." @default. (A hedged sketch of the frequency-domain scale-recovery idea follows the listing.)
- W2995776042 created "2019-12-26" @default.
- W2995776042 creator A5013630071 @default.
- W2995776042 creator A5029576208 @default.
- W2995776042 creator A5034550985 @default.
- W2995776042 creator A5055818111 @default.
- W2995776042 creator A5075510757 @default.
- W2995776042 date "2019-12-01" @default.
- W2995776042 modified "2023-09-24" @default.
- W2995776042 title "Equal-scale structure from motion method based on deep learning" @default.
- W2995776042 doi "https://doi.org/10.12086/oee.2019.190006" @default.
- W2995776042 hasPublicationYear "2019" @default.
- W2995776042 type Work @default.
- W2995776042 sameAs 2995776042 @default.
- W2995776042 citedByCount "0" @default.
- W2995776042 crossrefType "journal-article" @default.
- W2995776042 hasAuthorship W2995776042A5013630071 @default.
- W2995776042 hasAuthorship W2995776042A5029576208 @default.
- W2995776042 hasAuthorship W2995776042A5034550985 @default.
- W2995776042 hasAuthorship W2995776042A5055818111 @default.
- W2995776042 hasAuthorship W2995776042A5075510757 @default.
- W2995776042 hasConcept C10161872 @default.
- W2995776042 hasConcept C138885662 @default.
- W2995776042 hasConcept C146159030 @default.
- W2995776042 hasConcept C154945302 @default.
- W2995776042 hasConcept C158829959 @default.
- W2995776042 hasConcept C2776401178 @default.
- W2995776042 hasConcept C31972630 @default.
- W2995776042 hasConcept C41008148 @default.
- W2995776042 hasConcept C41895202 @default.
- W2995776042 hasConcept C65909025 @default.
- W2995776042 hasConcept C79061980 @default.
- W2995776042 hasConcept C81363708 @default.
- W2995776042 hasConceptScore W2995776042C10161872 @default.
- W2995776042 hasConceptScore W2995776042C138885662 @default.
- W2995776042 hasConceptScore W2995776042C146159030 @default.
- W2995776042 hasConceptScore W2995776042C154945302 @default.
- W2995776042 hasConceptScore W2995776042C158829959 @default.
- W2995776042 hasConceptScore W2995776042C2776401178 @default.
- W2995776042 hasConceptScore W2995776042C31972630 @default.
- W2995776042 hasConceptScore W2995776042C41008148 @default.
- W2995776042 hasConceptScore W2995776042C41895202 @default.
- W2995776042 hasConceptScore W2995776042C65909025 @default.
- W2995776042 hasConceptScore W2995776042C79061980 @default.
- W2995776042 hasConceptScore W2995776042C81363708 @default.
- W2995776042 hasIssue "12" @default.
- W2995776042 hasLocation W29957760421 @default.
- W2995776042 hasOpenAccess W2995776042 @default.
- W2995776042 hasPrimaryLocation W29957760421 @default.
- W2995776042 hasRelatedWork W1147166528 @default.
- W2995776042 hasRelatedWork W2028909057 @default.
- W2995776042 hasRelatedWork W2040112476 @default.
- W2995776042 hasRelatedWork W2292391751 @default.
- W2995776042 hasRelatedWork W2519469348 @default.
- W2995776042 hasRelatedWork W2526492353 @default.
- W2995776042 hasRelatedWork W2580376702 @default.
- W2995776042 hasRelatedWork W2767430525 @default.
- W2995776042 hasRelatedWork W2898741471 @default.
- W2995776042 hasRelatedWork W2904323142 @default.
- W2995776042 hasRelatedWork W2954307577 @default.
- W2995776042 hasRelatedWork W2963656298 @default.
- W2995776042 hasRelatedWork W2972529580 @default.
- W2995776042 hasRelatedWork W2980356813 @default.
- W2995776042 hasRelatedWork W3006581434 @default.
- W2995776042 hasRelatedWork W3127104920 @default.
- W2995776042 hasRelatedWork W3197698081 @default.
- W2995776042 hasRelatedWork W2931363274 @default.
- W2995776042 hasRelatedWork W3095805630 @default.
- W2995776042 hasRelatedWork W3138179155 @default.
- W2995776042 hasVolume "46" @default.
- W2995776042 isParatext "false" @default.
- W2995776042 isRetracted "false" @default.
- W2995776042 magId "2995776042" @default.
- W2995776042 workType "article" @default.
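
The abstract above describes recovering the monocular scale factor by comparing IMU measurements with the ORB-SLAM2 camera trajectory in the frequency domain. The sketch below only illustrates that general idea and is not the paper's actual algorithm: the function name `estimate_scale_frequency_domain`, its inputs, and the least-squares spectral-ratio formulation are assumptions, and real use would additionally require time synchronization, gravity compensation, and frame alignment, which are omitted here.

```python
import numpy as np

def estimate_scale_frequency_domain(cam_positions, imu_accel, dt):
    """Hypothetical sketch: estimate the metric scale of an up-to-scale
    monocular trajectory by comparing camera-derived and IMU-measured
    accelerations in the frequency domain.

    cam_positions : (N, 3) camera positions from ORB-SLAM2 (unknown scale)
    imu_accel     : (N, 3) gravity-compensated IMU accelerations in m/s^2,
                    assumed time-synchronized and expressed in the same frame
    dt            : sampling interval in seconds
    """
    # Differentiate the up-to-scale positions twice to get an
    # up-to-scale acceleration signal.
    cam_accel = np.gradient(np.gradient(cam_positions, dt, axis=0), dt, axis=0)

    # Compare magnitude spectra; this discards phase and is therefore
    # tolerant of small residual time offsets.
    cam_spec = np.abs(np.fft.rfft(cam_accel, axis=0))
    imu_spec = np.abs(np.fft.rfft(imu_accel, axis=0))

    # Least-squares scale factor mapping the camera spectrum onto the
    # IMU spectrum: s = <imu, cam> / <cam, cam>.
    return float(np.sum(imu_spec * cam_spec) / np.sum(cam_spec ** 2))


# Example with synthetic data: a sinusoidal trajectory scaled down by 4x.
if __name__ == "__main__":
    t = np.arange(0, 10, 0.01)
    true_positions = np.stack([np.sin(t), np.cos(t), 0.1 * t], axis=1)
    cam_positions = true_positions / 4.0  # up-to-scale SLAM output
    imu_accel = np.gradient(np.gradient(true_positions, 0.01, axis=0), 0.01, axis=0)
    print(estimate_scale_frequency_domain(cam_positions, imu_accel, 0.01))  # ~4.0
```

Comparing magnitude spectra is one simple way to relate the two acceleration signals without exact phase alignment; the paper's actual frequency-domain formulation may differ.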