The Nanoscale World

HSDC data conversion (Matlab)

Answered (Verified) This post has 2 verified answers | 9 Replies | 4 Followers

palli posted on Tue, Sep 20 2011 11:52 PM

Hi all

I am trying to export some HSDC data (version 8.1) to Matlab.

Following the tips in an earlier post on this forum (http://nanoscaleworld.bruker-axs.com/nanoscaleworld/forums/p/372/561.aspx) I was able to work out how to convert the raw deflection data in the HSDC file to Matlab format and then scale from LSB value to force.  

But there are still a couple of things bothering me. Firstly, the data is 1235008 points long (corresponding to \Number of samples: 1235008 in the HSDC header info), but when I open the data in NanoScope Analysis it shows 2411 curves * 512 data points = 1234432; in other words, there is a difference of 576 data points.

This probably relates to another problem I can't figure out, namely where in the string of 1235008 data points the first force curve starts. At n = 1, at n = 577, or somewhere else? In other words, if I open curve no. 1 in NanoScope Analysis, where along the string of 1235008 data points in my Matlab file should I find the corresponding data for force curve no. 1, so that I can check that my conversions are all OK (they look OK-ish)?

Finally, it is worth mentioning that every 512th data point in this string (starting at n = 465) is associated with a spike (LSB = -32768). I simply removed these and replaced them with the average of the neighbouring points, but I have the feeling that is probably not the right thing to do.
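For illustration, the spike bookkeeping described above can be sketched in a few lines of Python. The array below is synthetic, with marker spikes planted every 512th point to mimic the layout described (the actual file-reading code is not shown here):

```python
import numpy as np

# Synthetic stand-in for the raw int16 LSB stream read from the HSDC file;
# the layout mimics the thread: 1235008 points, first spike at n = 465 (1-based).
rng = np.random.default_rng(0)
data = rng.integers(-2000, 2000, size=1235008).astype(np.int16)
data[464::512] = -32768                   # plant a marker spike every 512th point

spikes = np.flatnonzero(data == -32768)   # 0-based indices of the marker spikes
assert np.all(np.diff(spikes) == 512)     # markers should be evenly spaced

# Split the stream into per-tap segments between consecutive spikes,
# rather than interpolating the markers away.
curves = [data[a + 1:b] for a, b in zip(spikes[:-1], spikes[1:])]
print(len(curves), len(curves[0]))        # 2411 segments of 511 points each
```

Splitting between markers avoids having to replace them with interpolated values; whether the segment boundaries line up with the approach/retract phases is a separate question.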

Thanks in advance for your comments / suggestions.

Palli

----

Dr. Pall Thordarson
Senior Lecturer

School of Chemistry
The University of New South Wales (UNSW)
Sydney
2052 NSW
Australia
---------------------
Tel: +61-(0)2-9385-4478
Fax: +61-(0)2-9385-6141
Web:
www.chem.unsw.edu.au/research/groups/thordarson

 

 



All Replies

Bruker Employee

It sounds like your HSDC data set contains Peak Force Tapping data. Is that right? If so, what is your modulation frequency? Peak Force Tapping HSDC captures usually use the 500 kHz ADC, so you can work out the number of points per curve from that. The spikes are added artificially to help you find the start of each tap.
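The relation Bede describes is simply points per curve = ADC rate / modulation frequency. Turned around, the 512 points per curve that NanoScope shows for this data set would imply a modulation frequency of roughly 977 Hz (a back-of-the-envelope check, not a value from the file):

```python
# Back-of-the-envelope check of the ADC-rate / points-per-curve relation.
adc_rate_hz = 500_000                 # 500 kHz high-speed ADC (from the reply above)
points_per_curve = 512                # what NanoScope shows for this data set
implied_freq_hz = adc_rate_hz / points_per_curve
print(implied_freq_hz)                # implied Peak Force Tapping frequency in Hz
```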

--Bede

palli replied on Wed, Sep 21 2011 8:53 PM

Hey Bede

Thanks for your quick reply. Yes, it is Peak Force data, and yes, the parameter High Speed Data Channel Rate = 500000. But I don't quite follow how that explains my problem: how does that number relate to the number of samples (1235008) vs. the number of points per curve (which in the NanoScope software seems to be 512)?

With the spikes, though: there are exactly 2412 spikes and hence 2411 gaps between them, each with 511 data points. If I replace each spike with a neighbouring value I get 2411 curves with 512 points each. The trouble is that the spikes don't seem to coincide with the start of an approach curve or any other notable part of the approach/retract cycle. In my particular example of 1235008 points, the first spike comes at n = 465, but visual inspection of the data suggests the approach curve starts at ca. n = 64. So where do I find the info that tells me where the approach curve starts?

I should add to all of this two other issues I have with the whole HSDC peak force data conversion:

i) I can't seem to get the z-value data right either. The NanoScope software shows a bell-shaped curve (the PeakForce "swing"), but everything I can find in the guidelines about file conversion suggests that the z-values are obtained from a simple linear function (NanoScope User Guide 8.10, rev. C, page 175). So how are we meant to reconstruct the "bell curve" from the header info?

ii) I also exported all 2411 curves via NanoScope as x-z data and then imported them back into Matlab via a batch process (quite a tedious process, but it works). In this case I can see the bell-shaped z-values, and after flipping the deflection values I get something very similar to the native NanoScope values. But the deflection values are all ca. 4% off and I can't figure out why.

Cheers

Palli

Bruker Employee
Verified by palli

Palli,

You are on the right track for our recommended best practice for getting QNM force curves into third-party software (such as Matlab).

Use the NanoScope Analysis function Analysis->QNM HSDC-ForceCurve-Image. Select Multiple, set the cursors, then export (you can put all the curves between the cursors). Take the option to export two evenly spaced curves in each file, so you get data for both deflection and position in individual files for each curve. You can then import these files into Matlab using a script.
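The batch-import step can be sketched like this (shown in Python for concreteness; the directory name, file pattern, and two-column layout are assumptions, so adjust them to match what your NanoScope export actually produces):

```python
import glob
import os

import numpy as np

# Fake a few exported curve files so the sketch is self-contained; in real use
# these would come from the NanoScope Analysis export described above.
os.makedirs("export", exist_ok=True)
for i in range(3):
    np.savetxt(f"export/curve_{i:04d}.txt",
               np.column_stack([np.linspace(0.0, 100.0, 512),   # Z position
                                np.random.rand(512)]))          # deflection

# Batch import: one (z, deflection) pair per file.
curves = []
for path in sorted(glob.glob("export/curve_*.txt")):
    z, defl = np.loadtxt(path, unpack=True)   # pass skiprows=... if your files have header lines
    curves.append((z, defl))

print(f"imported {len(curves)} curves of {len(curves[0][0])} points")
```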

When you do this, you will see that the lever movement is sinusoidal (not a traditional ramp), which explains the bell shape in your data. Your other issue comes from a phase delay in the acquisition that NanoScope Analysis corrects automatically. I don't have the procedure for doing that manually; our recommended approach is to use the NanoScope Analysis function.
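For intuition, a sinusoidal Z trajectory over one tap looks like the sketch below. The amplitude, frequency, and phase here are placeholders, not values read from any file; NanoScope Analysis derives the real ones (including the acquisition phase delay) internally:

```python
import numpy as np

# Placeholder sinusoidal Z trajectory for one 512-point tap sampled at 500 kHz.
# Amplitude and phase are illustrative assumptions only.
amplitude_nm = 150.0
freq_hz = 976.5625            # 500 kHz ADC / 512 points per tap
phase_rad = 0.0               # real data has a nonzero phase delay
t = np.arange(512) / 500_000.0
z = amplitude_nm * np.cos(2 * np.pi * freq_hz * t + phase_rad)
print(z[0], z[256])           # one full period: starts at +A, reaches -A mid-tap
```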

Hope this helps,
Steve

palli replied on Thu, Sep 22 2011 5:21 PM

Hi Steve

Thanks for your answer - yes it does help!

I can see your point: exporting multiple files and then batch-importing into Matlab is probably the best option in the end. The only downside is that when you export a 4 MB HSDC file as 2000 curves you get over 100 MB of data, which then shrinks back to 1.5 MB once it has been converted to a Matlab file. I guess that is why I was hoping I could do this directly from the 4 MB HSDC file, but there seem to be a lot of little issues I would need to work out to do that (I am not really a programmer!).

With regard to why the deflection values in the ASCII-exported files are slightly different (4%) from the ones that come from exporting the curves from NanoScope and opening them back up in Matlab via a script: I am not sure I understand what you mean by a phase delay having anything to do with that, or how one could automatically correct for it. I did, though, find this in the header info of the individual files that the HSDC export creates:

\Deflection Sensitivity Correction: 1.08

Does that have anything to do with this small issue? (The number doesn't match, though!)

Alternatively, I could just correct for this manually after the batch import, because the error is always the same. Perhaps that is the simplest solution?

Cheers

Palli

Bruker Employee

Hi Palli,

I'm glad to see that you have a solution. With regard to the 4% issue:

  1. The deflection sensitivity correction is only used when doing thermal tune calibrations of the spring constant, so that is not causing the difference.
  2. I'm guessing that the difference comes from using the wrong sensitivity (there are about four that are all very similar: deflection sens., amplitude sens., deflection error sens., etc.).

Can you provide an HSDC file and your result? Your code might be useful as well. You can post it here or email me at firstname dot lastname at bruker-nano.com.

--Bede Pittenger 

palli replied on Sun, Sep 25 2011 7:41 PM

Hi Bede

Sure, I am happy to share the code. It is largely based on Jaco de Groot and Macerano Blanco: http://www.mathworks.com/matlabcentral/fileexchange/11515-open-nanoscope-6-afm-images .

Can you add the zip file from my file folder to this post (I can't seem to be able to)? It includes the two files required for batch processing HSDC PeakForce data.

N.b., I think there is also a small systematic error in the z-axis values that I get with this code.

I will email you the other files.

Cheers

Palli

 

Bruker Employee

Palli,

I uploaded your file here: batchHSDC.zip

Bruker Employee

Hi Palli,

I do not see any code in these files that reads the header to find the deflection scale and then multiplies the data points by it to scale your deflection values. Did you find your issue? Or maybe we still don't have the right Matlab files? Or am I confused about where you see the 4% difference?

--Bede

Bruker Employee
Verified by palli

After some more discussion with Palli and a review of the Matlab code, it appears that the raw data is read from binary and ordered correctly, but it is not scaled.

Coincidentally, the scaling factors are very close to 1.0 with this data, which caused a lot of confusion for both of us. Here is how to scale the data from this force-distance file (created by exporting from PFQNM data): FrcExport-Palli_1.123

The Force data is in the first channel and the (sinusoidal) Z position is in the second channel.  Both channels need to be scaled using the following information from the header file:

\@Sens. DeflSens: V 46.645 nm/V

\@Sens. ZsensSens: V 2656.67 nm/V

\*Ciao force image list

\Spring Constant: 0.0549566

\@4:Z scale: V [Sens. DeflSens] (0.000375 V/LSB) 2.500000 V

\*Ciao force image list

\@4:Z scale: V [Sens. ZsensSens] (0.0003750000 V/LSB) 6.965273 V

 

The scaling factor for force is then: (0.0549566 N/m) * (46.645 nm/V) * (0.000375 V/LSB) = 0.0009612 nN/LSB, or 0.9612 pN/LSB, which differs from 1.0 by about 4%.

 

The scaling factor for Z position is: (2656.67 nm/V) * (0.000375 V/LSB) = 0.9963 nm/LSB, which is very close to 1.0 in this case!
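These two factors can be double-checked with a few lines (values copied straight from the header excerpt above; tiny rounding differences from the figures quoted are expected):

```python
# Scaling factors for HSDC PeakForce data, from the header values quoted above.
spring_constant = 0.0549566        # N/m
defl_sens = 46.645                 # nm/V, \@Sens. DeflSens
zsens = 2656.67                    # nm/V, \@Sens. ZsensSens
hard_scale = 0.000375              # V/LSB, from the \@4:Z scale lines

force_scale = spring_constant * defl_sens * hard_scale   # nN/LSB
z_scale = zsens * hard_scale                             # nm/LSB

print(f"force: {force_scale:.7f} nN/LSB ({force_scale * 1000:.4f} pN/LSB)")
print(f"z:     {z_scale:.4f} nm/LSB")
```

Multiplying the raw int16 channel data by these factors gives force in nN and Z position in nm directly.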

 

Copyright (c) 2011 Bruker Instruments