More

Updating attribute table from an Excel file using Python


I am kinda new to Python, so please excuse such a trivial question. There is a shapefile (let's call it Materials.shp). I have an attribute table with different fields (for example, windows, doors, etc.). The script I am trying to write needs to update the values for those fields. The new values for those fields come from an Excel file. Is it possible to do that? What is the easiest way for the script to update the values of these cells? I am using ArcGIS for Desktop.


There are several ways to read an Excel file with Python; which one to use depends on the version of the Excel file. I used xlrd in the past and it was really simple.

On this page: http://www.python-excel.org/ there's a list of modules that you can try for reading the data in the spreadsheet. Documentation for each one is linked there. You might find this useful, at least for that part of your task.
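To sketch the whole task end to end: build a lookup of the new values keyed by some ID, then walk the shapefile's rows with an update cursor. The snippet below is a minimal sketch, assuming the Excel sheet has been exported to CSV and that Materials.shp has a matching ID field; the column and field names (ID, windows, doors) are made up for illustration, and the arcpy part only runs inside ArcGIS's own Python:

```python
import csv
import io

def load_updates(csv_text, key="ID"):
    """Build {feature_id: {field: new_value}} from the exported sheet."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row[key]: {k: v for k, v in row.items() if k != key}
            for row in reader}

sample = "ID,windows,doors\nB001,4,2\nB002,6,3\n"
updates = load_updates(sample)
print(updates["B001"])  # prints {'windows': '4', 'doors': '2'}

# Inside ArcGIS Desktop you would then apply the lookup, roughly:
#   import arcpy
#   fields = ["ID", "windows", "doors"]
#   with arcpy.da.UpdateCursor(r"C:\data\Materials.shp", fields) as cur:
#       for row in cur:
#           if row[0] in updates:
#               row[1] = updates[row[0]]["windows"]
#               row[2] = updates[row[0]]["doors"]
#               cur.updateRow(row)
```

If you'd rather read the .xls file directly, xlrd (or openpyxl for .xlsx) can replace the CSV step; the lookup-then-cursor structure stays the same.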


How to Update an SQL Table from Excel

If you want to easily let your non-technical users update and manage SQL Server data from Excel, click here to download the SQL Spreads Excel Add-In.

SQL Spreads solves some common data management problems for Microsoft SQL Server. It makes it fast and simple to update an SQL table from an Excel spreadsheet. And it gives you the control you need to manage data entered by various users on a collaborative team.

End users love working in Excel

End users love working in Excel. They know the tool, and they are free to do what they want. That’s the heart of the much-loved Excel application, but also the start of problems for the people taking care of the data. The freedom to add cells and enter “whatever-you-like” values causes huge problems when trying to store and summarize the data in a structured way.

Updating or collecting some “not-available-in-our-systems” data from colleagues is often done by mailing out some Excel file or putting a spreadsheet on a file share.

When users update data in an Excel spreadsheet that should be saved or updated in an SQL table, problems like these usually occur:

  • Cells in the spreadsheet can contain invalid data types.
  • Users can change the layout of the sheet, which breaks downstream processing.
  • It is difficult to keep track of previous versions of the Excel spreadsheet.
  • It is hard to track who changed a specific value in a sheet.
  • Extracting the Excel data (with a tool such as SSIS) is troublesome.
  • There can be a delay of several hours between when a user enters the figures and when they appear in the database.

In finance, IT, and other fields, structured data is a vital part of operations. In those fields, you can, literally in minutes, let your end users update data in structured SQL tables themselves, using Excel. No coding experience or extensive training is necessary.

Here's information on how you can use SQL Spreads, a Microsoft Excel Add-In, to efficiently and accurately update an SQL Server database from Excel. I will show how to bring your SQL Server tables into Excel for easy updating and management, then how to share the document with your end users and keep track of data quality.

How to Update an SQL Table from Excel

To set up an Excel document to work with the data in an SQL Server table, follow these few simple steps:

  1. Download and install the SQL Spreads Excel Add-In.
  2. Go to the SQL Spreads tab in Excel and select Design mode.
  3. A list of databases will appear on the right. Choose the database you are using and select an SQL table to update from Excel.
  4. From the Columns tab you can fine-tune how your table is presented in Excel. You can select the columns you want to update, rearrange them into the order you prefer, and change their names if desired.
  5. When you have finished fine-tuning your table, go to the spreadsheet and start updating the data from SQL Server. When you press the Save button, the changes will be saved back to your SQL Server table.

The Designer also offers several other benefits for connecting an Excel spreadsheet to a table in SQL Server. For example:

  • Set which columns are editable and which are “read-only”
  • Select which rows in the database are loaded into the Excel spreadsheet
  • Enable Change Tracking and the application will then insert the date and time when a row is changed, as well as the user making the change.

Let your non-technical users update and manage the SQL Server data

After you exit the Design mode you can share your Excel document like any other Excel file. All the settings will follow the document and other users can use your Excel file to update the SQL tables from Excel.

But maybe one of the biggest benefits of SQL Spreads is its ease of use. And the benefits are not only for administrators but also for authorized users throughout your business or enterprise. Non-technical users can use SQL Server-connected Excel documents that you create and share with them. The result will be an accurate and effective collaboration with safeguards including built-in conflict detection.

Assured Data Quality

To get the highest possible quality of data, SQL Spreads uses several methods to guarantee the validity of the entered data:

  • When figures are entered, they are validated against the types of the database columns, and the user receives immediate feedback.
  • Each changed row is tracked in the database, recording when the row was changed and by whom.
  • A built-in conflict detection system enables safe and easy collaboration.
  • When sharing the document with others, they can be given an Editor role, which disables Design mode and protects the Excel sheet setup that you've created.

Familiar and User-friendly Excel Interface

The data in SQL Server tables can be directly updated from Excel. Users are authenticated using their Windows Login and can only work with the Excel documents for which they are authorized.

Data is automatically validated when users enter their figures through SQL Spreads. And data from other Microsoft Excel documents can be pasted directly into the SQL Server connected documents.

A Low-Stress Solution with High Value to Your Organization

  • Use Excel to work with data in SQL Server tables.
  • Let non-technical users work with the SQL Server data.
  • Ensure that the entered data is valid.

SQL Spreads also offers more far-reaching benefits to your business or enterprise. You will immediately see time savings across the board.

  • First, the setup is really fast and simple.
  • Second, when end-users enter data, SQL Spreads will guide them through the right way to enter the data.
  • Third, data owners will have the advantage of being able to easily access centralized data through Excel.
  • Fourth, you can put an end to struggling with importing Excel data using SSIS or maintaining VBA scripts.
  • Lastly, no more troubleshooting and correcting problems created by users altering the spreadsheet.

Those time-consuming processes and frustrations are replaced by SQL Spreads with fast, reliable data management.

Try SQL Spreads First-Hand to Take Control of your SQL Server Data Management

Try SQL Spreads by downloading the new SQL Spreads trial from this page.

There is also a demo video available showing how you can use SQL Spreads to create an Excel document to update the SQL table from Excel.

Editor's note: This blog post was originally published for a previous version of SQL Spreads and has been completely revamped and updated for accuracy and comprehensiveness.


The Plan of Action in 4 Steps

  1. Review the exported CSV dataset to ensure that the SharePoint fields match the CSV data
  2. Download and install SharePoint Online Management Shell
  3. Install the PowerShell PnP library
  4. Write and run the script in PowerShell

Let's get started!

Step 1: Review the exported CSV dataset to ensure that the SharePoint fields match the CSV data

The first step is very important to ensure the data imports to SharePoint with no errors. We need to check the CSV file columns to make sure they match the data types of the associated SharePoint fields. Let's open a CSV file and review the data set.

In my example, I have a CustomerData.csv file with a few records.

A few things to be aware of:

  • The first row of the CSV file must represent the column name
  • All the data in each column should have the same data type for a consistent result
  • If your SharePoint list has a choice column, you need to make sure that the data in the corresponding CSV column matches the available options in that choice column. In my example, the SharePoint list has a column named Status, which is a choice column with only two options: Active and Inactive. If my CSV dataset had a record with the status Open, that record would import with an error

We can see in my CSV file example that Title, Address, City, and State are one-line text fields. Since the Detailed Notes column may have more than 255 characters, we will consider this field as a multiline text field. Contract Start Date is a date field and Status is a choice column with two options (Active and Inactive).
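If you prefer to script this review, here is a rough Python sketch of the same check (the column names and the Active/Inactive choices come from the example above; the date format is an assumption):

```python
import csv
import io
from datetime import datetime

ALLOWED_STATUS = {"Active", "Inactive"}

def find_bad_rows(csv_text):
    """Return (line_number, problem) pairs for rows that would fail import."""
    problems = []
    for n, row in enumerate(csv.DictReader(io.StringIO(csv_text)), start=2):
        if row["Status"] not in ALLOWED_STATUS:
            problems.append((n, "Status not a valid choice"))
        try:
            datetime.strptime(row["Contract Start Date"], "%m/%d/%Y")
        except ValueError:
            problems.append((n, "bad date"))
    return problems

sample = ("Title,Contract Start Date,Status\n"
          "Acme,01/15/2020,Active\n"
          "Beta,not-a-date,Open\n")
print(find_bad_rows(sample))
# prints [(3, 'Status not a valid choice'), (3, 'bad date')]
```

Fixing these rows up front is much cheaper than diagnosing partial import failures afterwards.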

Now, let's review the SharePoint list to make sure it can accommodate the data from the CSV file. Navigate to your SharePoint site and open your list. In my example, I have a SharePoint list called Customer Database.

To open the SharePoint list settings, click on the gear icon at the top right corner of the screen and choose List settings from the drop-down menu.

Scroll down to the Columns section and verify that the data types of your SharePoint list's columns match all the columns in the CSV file.

In my example, I have a perfect match between the CSV file and the SharePoint list. If you find an inconsistency in your dataset, you will have to decide whether to fix your historical data (the CSV file) or change the SharePoint column types. Let's move on to the next step of the process.

Step 2: Download and install SharePoint Online Management Shell

Like many other Microsoft products, SharePoint has its own management shell. It is a PowerShell module that can be used to manage SharePoint Online and SharePoint on-premises site collections and sites. You can download the SharePoint Online Management Shell here; make sure you check the System Requirements and download the right version of the Management Shell.

Note: To use SharePoint Online Management Shell, you must have SharePoint Online Global Admin rights.

Step 3: Install PowerShell PnP library

Let's talk briefly about what the PnP library is. PnP PowerShell is a library that contains PowerShell cmdlets to manage SharePoint artifacts. You can provision sites and site collections, manage lists, libraries, and more. For more details, visit Microsoft's documentation: PnP PowerShell overview.

To install the PnP PowerShell library, open the SharePoint Online Management Shell and type the cmdlet below:

Install-Module SharePointPnPPowerShellOnline

Note: To successfully run the command, make sure you open the Management Shell as an administrator.

You can use the PnP PowerShell library with SharePoint Online, on-premises 2016, and on-premises 2013, but you will have to install a different module for each version of the product.

SharePoint Online: Install-Module SharePointPnPPowerShellOnline

SharePoint 2016: Install-Module SharePointPnPPowerShell2016

SharePoint 2013: Install-Module SharePointPnPPowerShell2013

After the module is installed, we are ready to write our script to import thousands of records from a CSV file to SharePoint Online.

Step 4: Write and run the script

Before we write the script, let's consider its mechanics:

  • Connect to SharePoint Online environment
  • Load the content of the CSV file to a variable, which creates an array
  • Loop through each item in the variable (array) and create associated Item in the SharePoint Online list

First, we need to connect to our SharePoint Online environment. Again, make sure that you have SharePoint Global Administrator rights to run the cmdlets below. To connect to the environment, we will save admin credentials into a variable called $credentials.

$credentials = Get-Credential -Message "Please Enter SharePoint Online credentials"

After running this command, you will be prompted to type your username and password.

Remember that variables only last for the current PowerShell session. If you close and re-open the Management Shell, you will have to re-create all the variables.

Next, we will create a variable called $Site with the URL of your SharePoint site:

$Site = "https://SITE-URL.sharepoint.com/"

Note: You don't have to include the SharePoint list/library name in the URL.

Now, using these two variables ($credentials and $Site), we will use PnP cmdlet to connect to the SharePoint Online environment:

Connect-PnPOnline -Url $Site -Credentials $credentials

To make sure that you are in the right site collection, you can run the Get-PnPList cmdlet to locate the SharePoint list that you are trying to work with.

According to the result of the Get-PnPList cmdlet, we are in the right place.

Next, we are going to load CSV file data into the variable named $CustomerData.

$CustomerData = Import-CSV "C:\Project\CustomerData.csv"

To verify that your data set loaded correctly, you can display the content of the CSV file. Type the name of the variable and hit enter.

If you have a large data set, you may not want to display the entire list. Run the cmdlet below to show the first three records in the dataset.

$CustomerData | Select-Object -First 3

Great, we are connected to the right environment, and we have loaded the dataset. Next, we are going to loop through each item in the $CustomerData variable and create an associated item in the SharePoint list.


All Answers

Might work

OK, I presume I don't need to tell you to make a backup of the table and use the backup to test this on :)

Import that data.
Now: "The spreadsheet contains 5 fields, the first two of which in combination are the unique keys and are therefore used to select which records in the SQL table are to be updated."

Does that mean that your ID in the SQL table is "BG123456" (all in one column, in a single string)?

If so, either transform it on the import so that it produces a field called My_ID, or, after you import, create a new column and put it in there:

UPDATE mySQLTable
SET mySQLTable.desiredcolumn = myImportTable.desiredcolumn
FROM mySQLTable
INNER JOIN myImportTable
ON mySQLTable.unique_id = myImportTable.my_id

Short and sweet.
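You can try the pattern out in miniature with Python's stdlib sqlite3 (table and column names follow the answer above; since portable SQLite lacks T-SQL's UPDATE ... FROM join syntax, the equivalent correlated subquery is used):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE mySQLTable (unique_id TEXT PRIMARY KEY, desiredcolumn TEXT);
CREATE TABLE myImportTable (my_id TEXT PRIMARY KEY, desiredcolumn TEXT);
INSERT INTO mySQLTable VALUES ('BG123456', 'old'), ('BG999999', 'keep');
INSERT INTO myImportTable VALUES ('BG123456', 'new');
""")

# Update only the rows that have a matching record in the import table.
con.execute("""
UPDATE mySQLTable
SET desiredcolumn = (SELECT i.desiredcolumn FROM myImportTable i
                     WHERE i.my_id = mySQLTable.unique_id)
WHERE EXISTS (SELECT 1 FROM myImportTable i
              WHERE i.my_id = mySQLTable.unique_id)
""")

rows = dict(con.execute(
    "SELECT unique_id, desiredcolumn FROM mySQLTable ORDER BY unique_id"))
print(rows)  # prints {'BG123456': 'new', 'BG999999': 'keep'}
```

Note the WHERE EXISTS guard: without it, rows with no matching import record would be overwritten with NULL.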

Thanks for the quick and accurate reply. The five fields in the input correspond to five in the SQL table, and the index key is a concatenation of the first and second, so yes, the key is BG123456. I just concatenate the fields on either side of the 'ON' line to get what I want.
Thanks again.


Create External Tables for Excel

After creating the external data source, use CREATE EXTERNAL TABLE statements to link to Excel data from your SQL Server instance. The table column definitions must match those exposed by the CData ODBC Driver for Excel. You can refer to the Tables tab of the DSN Configuration Wizard to see the table definition.

Sample CREATE TABLE Statement

The statement to create an external table based on an Excel sheet would look similar to the following:

Having created external tables for Excel in your SQL Server instance, you are now able to query local and remote data simultaneously. Thanks to built-in query processing in the CData ODBC Driver, you know that as much query processing as possible is being pushed to Excel, freeing up local resources and computing power. Download a free, 30-day trial of the ODBC Driver for Excel and start working with live Excel data alongside your SQL Server data today.

CData Software is a leading provider of data access and connectivity solutions. Our standards-based connectors streamline data access and insulate customers from the complexities of integrating with on-premise or cloud databases, SaaS, APIs, NoSQL, and Big Data.


Using Microsoft Query in Excel to Connect to FTP

The FTP ODBC Driver is a powerful tool that allows you to connect with live data from remote files and directories, directly from any applications that support ODBC connectivity.

Access remote data like you would a database through a standard ODBC Driver interface.

This article uses the CData ODBC driver for FTP to import data in Excel with Microsoft Query. This article also demonstrates how to use parameters with Microsoft Query.

The CData ODBC driver for FTP uses the standard ODBC interface to link FTP data with applications like Microsoft Access and Excel. Follow the steps below to use Microsoft Query to import FTP data into a spreadsheet and provide values to a parameterized query from cells in a spreadsheet.

If you have not already, first specify connection properties in an ODBC DSN (data source name). This is the last step of the driver installation. You can use the Microsoft ODBC Data Source Administrator to create and configure ODBC DSNs.

To connect to FTP or SFTP servers, specify at least RemoteHost and FileProtocol. Specify the port with RemotePort.

Set User and Password to perform Basic authentication. Set SSHAuthMode to use SSH authentication. See the Getting Started section of the data provider help documentation for more information on authenticating via SSH.

Set SSLMode and SSLServerCert to secure connections with SSL.

The data provider lists the tables based on the available folders in your FTP server. Set the following connection properties to control the relational view of the file system:

  • RemotePath: Set this to the current working directory.
  • TableDepth: Set this to control the depth of folders to list as views.
  • FileRetrievalDepth: Set this to retrieve and list files recursively from the root table.

Stored Procedures are available to download files, upload files, and send protocol commands. See the Data Model chapter of the FTP data provider documentation for more information.

You can then work with live FTP data in Excel.

  1. In Excel, open the Data tab and choose From Other Sources -> From Microsoft Query.
  2. Choose the FTP DSN. Select the option to use Query Wizard to create/edit queries.
  3. In the Query Wizard, expand the node for the table you would like to import into your spreadsheet. Select the columns you want to import and click the arrow to add them to your query. Alternatively, select the table name to add all columns for that table.
  4. The Filter Data page allows you to specify criteria. For example, you can limit results by setting a date range.
  5. If you want to use parameters in your query, select the option to edit the query in Microsoft Query.

To set a parameter in the query, you will need to modify the SQL statement directly. To do this, click the SQL button in the Query Editor. If you set filter criteria earlier, you should have a WHERE clause already in the query.

To use a parameter, use a "?" character as the wildcard for a field's value in the WHERE clause. For example, if you are importing MyDirectory, you can set "FilePath=?".



Create a table in Mobile and Progressive Web App

To create a Table in a Mobile App, which you distribute as a native mobile app or Progressive Web App (PWA), fetch some data to the Screen and then add Table to the Screen. Here is an example.

  1. Right-click the Screen in the Interface tab and select Fetch Data from Database or Fetch Data from Local Storage. The Aggregate editor opens.
  2. Drag an Aggregate from the Data tab to the Aggregate editor. The data preview now shows.
  3. Go back to the Screen. Search for Table in the widget toolbox. Drag the Table to the Screen.
  4. Select the Table. From the list in Table > Properties > Source select the List that was created previously by adding the Aggregate.

Expand the Aggregate and Entity associated with the Screen. To create the columns, drag the Entity Attributes to the Table. The Table preview populates with sample data from the data preview.


1 Answer

To get this working for Python 3, I needed to change the Tkinter import to the lower-case tkinter (a global search/replace).

Warwick, you have a lot of code in one big class! The creation and formatting of the worksheet can be extracted into another function, external to the simpleapp_tk class.

For example, if you were going to do that, the code inside the onButtonClick should look like:

With the start of the excel creation function starting like:

(I just did a cut/paste myself), and of course, the end of the spreadsheet creation function being like this:

The return None means that if there was an error creating the file inside the large try/except block, the function will return an empty value to the GUI class. With result being None, the GUI will display the correct message (the same as you currently have it).
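The shape of that refactor, as a minimal sketch (the function body here is a stand-in for the real worksheet-creation code, and the GUI wiring is shown only in comments):

```python
def make_me_a_spreadsheet(filename, rows):
    """Create the .xlsx file; return the filename, or None on any error."""
    try:
        if not filename.endswith(".xlsx"):
            raise ValueError("expected an .xlsx filename")
        # ... the worksheet creation/formatting code from the class goes here ...
        return filename
    except Exception:
        return None

# Inside the GUI class, onButtonClick then shrinks to roughly:
#     result = make_me_a_spreadsheet(self.entryVariable.get(), my_rows)
#     if result is None:
#         # show the same error message the GUI currently shows
#         ...

print(make_me_a_spreadsheet("report.xlsx", []))  # prints report.xlsx
print(make_me_a_spreadsheet("report.txt", []))   # prints None
```

The key point is the boundary: the function knows nothing about tkinter, and the GUI knows nothing about spreadsheets.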

Please have a go at making this change, and don't forget to make a backup of your code before changing it.

You may encounter a minor bug or two along the way.

However, I'm confident that you will be able to figure out any bugs. If you make this change, your code will take a small step towards being better: you will have separated the creation of the xlsx file from the GUI. In the future, if there is a problem when creating the spreadsheet, you know to look only at the make_me_a_spreadsheet function, and not at the GUI class. Much easier to track down bugs!

You can follow this methodology to keep improving your code: extract specific steps/actions into separate functions (passing values in and getting results out). Please see some of the other Python examples people offer on Code Review to get more ideas.


How to Convert Excel Tables to Fact Tables Using Query Editor

If you have an Excel table built for data entry, it may not be in the best format for data analysis or your data model. Using Query Editor and Unpivot Columns, you can keep the original table and create a version that you can use for analysis as a dimensional fact table.

The table pictured below is set up for easy data entry; however, it could be better structured for analysis. I want to treat the Week Number, Scenario, and Classification as dimensions in queries. I also want to combine the Week Number with the fiscal dates to create a DateTime dimension.

Using a little-known set of functionality in Excel, this becomes easy to do. With the top cell of the table highlighted (the Scenario row at the top is just a title row), go to Data on the toolbar and select the From Table option. The Query Editor screen comes up.

Highlight the columns that you want to keep as regular columns, in this case, Version and Classification. In the Unpivot Columns drop-down, select Unpivot Other Columns. This will transform the table into the version below.
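What Unpivot Other Columns actually does can be sketched in a few lines of Python (purely an illustration of the transformation, not of Query Editor itself; the column names here are made up):

```python
def unpivot_other_columns(rows, keep):
    """Turn every column not in `keep` into (Attribute, Value) pairs."""
    out = []
    for row in rows:
        fixed = {k: row[k] for k in keep}  # the columns that stay as-is
        for col, val in row.items():
            if col not in keep:
                out.append({**fixed, "Attribute": col, "Value": val})
    return out

wide = [{"Classification": "Sales", "Wk1": 100, "Wk2": 120}]
print(unpivot_other_columns(wide, keep=["Classification"]))
# prints [{'Classification': 'Sales', 'Attribute': 'Wk1', 'Value': 100},
#         {'Classification': 'Sales', 'Attribute': 'Wk2', 'Value': 120}]
```

Renaming Attribute afterwards corresponds to the final column rename in the walkthrough.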

One final edit to change the Attribute column name to WeekNum, and then go back to the Home tab and select Close & Load.

The result below is an updating table in a better format to be used in a data model. The Worksheet tab has been renamed to Fact Table in this example.

This quick tutorial shows how to take an Excel sheet used for data entry and convert it to a fact table by using the Unpivot Columns functionality in the Data Query tools in Excel.

Let me know what you think. Does this fit into your workflow? Let me know if you have any tricks you use.



A simple React component to create a spreadsheet.

  • Select cells, cut, copy and paste cells
  • Navigation using keyboard keys
  • Deletion using keyboard keys
  • Callbacks for onCellsChanged, valueRenderer(visible data)
  • dataRenderer(underlying data in the input, takes the value by default)
  • Supply your own editors and view controls with custom renderers
  • Extensive control over generated markup via custom renderers

React-Datasheet generates a table with the cells. Double-clicking or typing edits the value and, if it changed, initiates an onCellsChanged callback. Pasting tabular data or deleting a range of cells also calls onCellsChanged.

The data provided should be an array of rows, and each row should include the cells.

Cells with underlying data

There are two values that each cell shows: the first via valueRenderer and the second via dataRenderer. When a cell is in edit mode, it shows the value returned from dataRenderer, which needs to return a string because the value is set in an input field. Each of these callbacks is passed the cell value as well as the cell's coordinates in the spreadsheet. This allows you to apply formatting logic at rendering time, such as formatting all cells in the third column as dates.

Cells with underlying component

This renders a single cell with the value 5. Once in edit mode, the button will appear.

Cells with extra attributes

This renders two rows, each with two cells; the cells in the first row will have a data-hint attribute and the other two will not.

React-Datasheet allows you to replace the renderers both for the overall structure (rows, cells, the sheet itself) and for the editors and viewers of individual cells. This allows you to radically refashion the sheet to suit your requirements.

For example, this shows how to add separate headers and a checkbox at the start of each row to control row "selected" state. It also specifies a custom view renderer and a custom editor for the first column of each row:

Note: For brevity, in this example the custom renderers are all defined as arrow functions inside of render, but using a bound function in the parent component or a separate custom component will let you avoid a lot of needless re-renders.

Option Type Description
data Array Array of rows and each row should contain the cell objects to display
valueRenderer func Method to render the value of the cell function(cell, i, j) . This is visible by default
dataRenderer func Method to render the underlying value of the cell function(cell, i, j) . This data is visible once in edit mode.
overflow 'wrap'|'nowrap'|'clip' Grid default for how to render overflow text in cells
onCellsChanged func onCellsChanged handler: function(arrayOfChanges[, arrayOfAdditions]) {} , where each change is an object of the shape { cell, row, col, value }. See below for more details.
onContextMenu func Context menu handler : function(event, cell, i, j)
parsePaste func function (string) {} If set, the function will be called with the raw clipboard data. It should return an array of arrays of strings. This is useful for when the clipboard may have data with irregular field or line delimiters. If not set, rows will be split with line breaks and cells with tabs.
isCellNavigable func function (cell, row, col) If set, the function is used to determine whether navigation to the indicated cell should be allowed. If not allowed, cursor or tab navigation will skip over that cell until the next allowed cell is found.
handleCopy func function ({ event, dataRenderer, valueRenderer, data, start, end, range }) If set, this function is called whenever the user copies cells. The return string of this function is stored on the clipboard.

The following are optional functions or React Components that can completely override the native renderers of React-Datasheet. To know which props are passed down, see custom renderers.

Option Type Description
sheetRenderer func Optional function or React Component to render the main sheet element. The default renders a table element.
rowRenderer func Optional function or React Component to render each row element. The default renders a tr element.
cellRenderer func Optional function or React Component to render each cell element. The default renders a td element.
valueViewer func Optional function or React Component to customize the way the value for each cell in the sheet is displayed. Affects every cell in the sheet. See cell options to override individual cells.
dataEditor func Optional function or React Component to render a custom editor. Affects every cell in the sheet. See cell options to override individual cells.
selected object Optional. Whether the selection is controlled or uncontrolled. Must be an object of this format: { start: { i: number, j: number }, end: { i: number, j: number } }, or null for no selection.
onSelect func Optional. function ({ start, end }) {} Triggered on every selection change. start and end have the same format as the selected prop.

onCellsChanged(arrayOfChanges[, arrayOfAdditions]) handler

React-DataSheet will call this callback whenever data in the grid changes:

  • When the user enters a new value in a cell
  • When the user hits the delete key with one or more selected cells
  • When the user pastes tabular data into the table

The argument to the callback usually will be one array of objects with these properties:

Property Type Description
cell object the original cell object you provided in the data property. This may be null (see below)
row number row index of changed cell
col number column index of changed cell
value any The new cell value. This is usually a string, but a custom editor may provide any type of value.

If the change is the result of a user edit, the array will contain a single change object. If the user pastes data or deletes a range of cells, the array will contain an element for each affected cell.

Additions: If the user pastes data that extends beyond the bounds of the grid (for example, pasting two-row-high data on the last line), there will be a second argument to the handler containing an array of objects that represent the out-of-bounds data. These objects will have the same properties, except:

  • There is no cell property
  • Either row or col, or both, will be outside the bounds of your original grid. They will correspond to the indices the new data would occupy if you expanded your grid to hold them.

You can choose to ignore the additions, or you can expand your model to accommodate the new data.

Previously React-DataSheet supported two change handlers. These are still supported for backwards compatibility, but will be removed at some point in the future.

Option Type Description
onChange func onChange handler: function(cell, i, j, newValue) {}
onPaste func onPaste handler: function(array) {} If set, the function will be called with an array of rows. Each row has an array of objects containing the cell and raw pasted value. If the pasted value cannot be matched with a cell, the cell value will be undefined.

The cell objects are what get passed back to the onChange callback. They can contain the following options as well:

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| readOnly | Bool | false | Cell will never go into edit mode |
| key | String | undefined | By default, each cell is keyed by its column and row number. This overrides that key |
| className | String | undefined | Additional class names for the cell |
| component | ReactElement | undefined | A React element or JSX to insert into this cell. This renders in edit mode |
| forceComponent | bool | false | Renders what's in `component` at all times, even when not in edit mode |
| disableEvents | bool | false | Makes the cell unselectable and read-only |
| colSpan | number | 1 | The colSpan of the cell's td element |
| rowSpan | number | 1 | The rowSpan of the cell's td element |
| width | number or String | undefined | Sets the cell's td width using a style attribute. A number is interpreted as pixels; strings are used as-is. Note: this only works if the table does not have a set width |
| overflow | 'wrap' \| 'nowrap' \| 'clip' | undefined | How to render overflowing text. Overrides the grid-level overflow option |
| valueViewer | func | undefined | Optional function or React component to customize how this cell's value is displayed. Overrides the grid-level valueViewer option |
| dataEditor | func | undefined | Optional function or React component to render a custom editor. Overrides the grid-level dataEditor option |

Each of the following custom renderers should be either a React component or a function that takes a props argument and returns a React element (a.k.a. a stateless functional component). React-DataSheet will supply certain properties to each renderer.

In some cases React-DataSheet will include event handlers as properties to your custom renderer. You must hook up these handlers to your component, or aspects of React-DataSheet's built-in behavior will cease to work.

Except for valueViewer and dataEditor, each custom renderer will receive React's regular props.children. Be sure to render props.children in your custom renderer.

The sheetRenderer is responsible for laying out the sheet's main parent component. By default, React-DataSheet uses a table element. React-DataSheet will supply these properties:

| Option | Type | Description |
|--------|------|-------------|
| data | Array | The same data array as passed to the main ReactDataSheet component |
| className | String | Classes to apply to your top-level element. You can add to these, but you should not overwrite or omit them unless you want to implement your own CSS as well |
| children | Array or component | The regular React props.children. You must render these within your custom renderer or you won't see your rows and cells |

The rowRenderer lays out each row in the sheet. By default, React-DataSheet uses a tr element. React-DataSheet will supply these properties:

| Option | Type | Description |
|--------|------|-------------|
| row | number | The current row index |
| cells | Array | The cells in the current row |
| children | Array or component | The regular React props.children. You must render these within your custom renderer or you won't see your cells |

The cellRenderer creates the container for each cell in the sheet. The default renders a td element. React-DataSheet will supply these properties:

| Option | Type | Description |
|--------|------|-------------|
| row | number | The current row index |
| col | number | The current column index |
| cell | Object | The cell's raw data structure |
| className | String | Classes to apply to your cell element. You can add to these, but you should not overwrite or omit them unless you want to implement your own CSS as well |
| style | Object | Generated styles that you should apply to your cell element. This may be null or undefined |
| selected | Bool | Is the cell currently selected |
| editing | Bool | Is the cell currently being edited |
| updated | Bool | Was the cell recently updated |
| attributesRenderer | func | As for the main ReactDataSheet component |
| onMouseDown | func | Event handler important for cell selection behavior |
| onMouseOver | func | Event handler important for cell selection behavior |
| onDoubleClick | func | Event handler important for editing |
| onContextMenu | func | Event handler that launches the default context-menu handling. You can safely ignore this handler if you want to provide your own context-menu handling |
| children | Array or component | The regular React props.children. You must render these within your custom renderer or you won't see your cell's data |

The valueViewer displays your cell's data with a custom component when in view mode. For example, you might show a "three star rating" component instead of the number 3. You can specify a valueViewer for the entire sheet and/or for an individual cell.

React-DataSheet will supply these properties:

| Option | Type | Description |
|--------|------|-------------|
| value | node | The result of the valueRenderer function |
| row | number | The current row index |
| col | number | The current column index |
| cell | Object | The cell's raw data structure |
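The "three star rating" viewer mentioned above can be sketched without JSX, since a stateless functional component may return a plain string in React 16+. The component name and the five-star scale are assumptions for illustration; the `value`/`row`/`col`/`cell` props are those listed in the table.

```javascript
// Sketch of a star-rating valueViewer. React-DataSheet passes the
// rendered value plus row, col, and cell; we only need value here.
const StarRatingViewer = ({ value }) => {
  const rating = Number(value) || 0;
  // Filled stars for the rating, hollow stars for the remainder
  return '★'.repeat(rating) + '☆'.repeat(Math.max(0, 5 - rating));
};
```

It could then be used grid-wide (`valueViewer={StarRatingViewer}`) or per-cell (`{ value: 3, valueViewer: StarRatingViewer }`).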

The dataEditor displays your cell's data when in edit mode. You can use any component you want, as long as you hook up the event handlers that constitute the contract between React-DataSheet and your editor. You can specify a dataEditor for the entire sheet and/or for an individual cell.


ArcGIS Noob Question: Attribute Table to Excel?

I have been trying to export an attribute table directly to Excel, but it seems to require lots of add-ons I don't know how to install.

Select all records in your attribute table, right-click the little grey box to the left of any row, select Copy Selected, then paste into a blank Excel sheet. This is the fastest way.

This is definitely the simplest and the method I generally use. Other options require creating temporary dbf or csv files. This method allows you to export out selected records quickly instead of the entire table.

Hey thanks, I knew you could do this in QGIS w/ a Ctrl-C Ctrl-V but I did not know the "copy selected" trick worked in Arc.

I did not know this. Thank you.

Alternatively, just open the shapefile's .dbf file in Excel.

This works, but only if the data is in shapefile format; if it is in a geodatabase you cannot do this.

Learning time saving shit! Yay!

or use the "Table to Excel" tool under Conversion

You could use the export button as shown in the other answers, but in the drop-down menu where you choose the type, pick the "Text" option (I'm not at work right now or I'd post pics). Then, after you name your table, give it a .csv extension instead of .txt and you'll be able to open it right up in Excel.

Open up the attribute table, click Options, export the table as dBASE, then drag and drop the newly created file into Excel. http://i.imgur.com/GfxTiHE.png

Try XTools Pro; definitely worth a download and install. It can make life a lot easier in ArcMap. Not all of the features work with the free version, but the export to Excel does. You can pick which fields to include and whether it exports all records or only the selected ones. You then have the choice to save as an .xlsx, or it can open the table directly in the application (not saved).
