
User interaction with Python tool midway through script running?


I need to design a Python script that will take a CSV input, search for a matching polygon in our SDE data for each line, and then extract certain information about data that intersects with the polygon.

This is straightforward, except that about 5% of the input values will not be exact matches (e.g. a parcel number will have hyphens in the CSV, but the SDE value has no hyphens). I've got a few tests in there to handle these cases, but it will inevitably result in some situations where the script returns multiple possible candidate polygons and the user needs to be able to indicate which is correct. For example, the number may pull up Polygon A and Polygon B as possibilities, and it's up to the GIS analyst to visually examine to decide which is in the correct region.

If I were running the script in IDLE, then I could simply use raw_input to get that user input and proceed with the rest of the steps. Is there an equivalent mid-script method in ArcMap that accepts user input? I am only aware of the initial tool parameters, which wouldn't be useful for this issue.


Here is a really jankety solution using wxPython:

import wx
import os

def get_pid(parcels):
    app = wx.App(False)
    frame = MyForm(parcels)
    frame.Show()
    app.MainLoop()
    # MainLoop() returns once the user closes the window; by then the
    # selection has been written to a temp file on the Desktop.
    txt = os.path.join(os.environ['USERPROFILE'], r'Desktop\temp_parcel_Id.txt')
    with open(txt, 'r') as f:
        pid = f.readlines()[0].strip()
    os.remove(txt)
    return pid

########################################################################
class MyForm(wx.Frame):

    #----------------------------------------------------------------------
    def __init__(self, pars):
        wx.Frame.__init__(self, None, wx.ID_ANY, "Choose Parcel")

        # Add a panel so it looks correct on all platforms
        panel = wx.Panel(self, wx.ID_ANY)
        sampleList = []
        self.cb = wx.ComboBox(panel, size=wx.DefaultSize, choices=sampleList)
        self.widgetMaker(self.cb, pars)

        sizer = wx.BoxSizer(wx.VERTICAL)
        sizer.Add(self.cb, 0, wx.ALL, 5)
        panel.SetSizer(sizer)

    #----------------------------------------------------------------------
    def widgetMaker(self, widget, objects):
        """Populate the combo box and bind the selection handler."""
        for obj in objects:
            widget.Append(obj)
        widget.Bind(wx.EVT_COMBOBOX, self.onSelect)

    #----------------------------------------------------------------------
    def onSelect(self, event):
        """Write the selected parcel ID to a temp file on the Desktop."""
        self.selection = self.cb.GetStringSelection()
        txt = os.path.join(os.environ['USERPROFILE'], r'Desktop\temp_parcel_Id.txt')
        with open(txt, 'w') as f:
            f.write(self.selection)

# Run the program
if __name__ == "__main__":
    parcels = ['110120031', '180170020', '180150041']
    pid = get_pid(parcels)
    print pid

There are probably better ways out there, and I do not have much experience with making GUIs in Python, but this worked for me. You can call get_pid() from your module and pass in your parcel IDs. The way this works is that the user opens the dropdown, selects a parcel ID, and then closes the box. The PID is then returned.

This isn't the prettiest UI, but if you play with the code a bit it can be resized.


If I remember correctly, there is no arcpy equivalent to raw_input that can be used in an ArcGIS environment. I would suggest a workaround: have a second script for these instances. Save any intermediate data at the point where the user needs to make his or her choice. Have your second script reference the saved data as needed, and have its input be the choice that needs to be made by the user.
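A minimal sketch of that two-script approach, assuming the first script saved the candidate polygons to an intermediate feature class and the second script is published as a script tool. The parameter order and the PARCEL_ID field name are hypothetical:

import arcpy

# Second script: run by the analyst after visually reviewing the candidates.
chosen_pid = arcpy.GetParameterAsText(0)   # the parcel ID the analyst chose
candidates = arcpy.GetParameterAsText(1)   # intermediate feature class from script one

# Keep only the chosen polygon and continue with the extraction steps.
arcpy.MakeFeatureLayer_management(candidates, 'candidates_lyr')
arcpy.SelectLayerByAttribute_management(
    'candidates_lyr', 'NEW_SELECTION',
    "PARCEL_ID = '{0}'".format(chosen_pid))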


11 Best Python Monitoring Tools

If you’re looking to monitor and enhance your troubleshooting capabilities for your Python projects, you’re in the right place. In this article, we’ll break down some of the best Python monitoring tools you can use today.

Here is our list of the best Python monitoring tools:

    This monitoring system is able to track the performance of web apps and APIs that are written in a range of programming languages, including Python. Available for Windows Server and Linux.
  1. DataDog Python Application Performance: Ideal for dev teams and businesses that need feature-rich monitoring with prebuilt and customizable templates.
  2. Retrace APM: Provides code-level tracing and easy-to-use tools for isolation.
  3. New Relic: Automatically maps out your network to discover dependencies.
  4. AppDynamics: A full-featured performance monitor that works with Python and dozens of other apps.
  5. Scout APM: Very lightweight and simple to install.
  6. Dynatrace: Provides detailed topology of your applications with geographic data.
  7. Atatus: Drills down right to the source-code level from the dashboard to highlight issues.
  8. Prometheus: Open-source application monitor with full HTTP API integrations.
  9. Sentry.io: Cloud-based APM with Breadcrumb event-tracking features.
  10. Metricly: Uses anomaly detection to help prevent false positives.

How to force PowerShell to not allow an interactive command window

I am a Citrix administrator and would like to restrict the general user population on our servers from using PowerShell to run their own scripts or to use it interactively. We are already disallowing the use of the command prompt via GPO, but with PowerShell available, that's basically useless.

I've found that PowerShell.exe has a command-line option, -NonInteractive, which will allow a user to run a script but does not give them an interactive command prompt. The problem is that I have not found a way to force PowerShell to operate this way. I even went so far as to create a C:\Windows\System32\WindowsPowerShell\v1.0\Microsoft.PowerShell_profile.ps1 launch script that would check for the -NonInteractive parameter, but users can bypass that by simply launching PowerShell.exe with the -NoProfile parameter.

The other problem is that we do use a lot of PowerShell scripts to launch applications for users, and portions of the login script are written in PowerShell and need to run under the user context, so I can't simply ACL the EXE file. I need them to be able to use PowerShell, just not interactively. Ultimately, we want to enforce the AllSigned execution policy and sign all scripts so the only thing a user can run is a script that we (the admins) have created and/or signed off on.

I've tried googling for this answer and found many people using -NonInteractive, but I haven't found an instance where someone has tried to force it. Any ideas?


AWS SDK for Python (boto3)

Get started with AWS in no time using boto3 (the AWS SDK for Python). Boto3 makes it easy to integrate Python applications, libraries, and scripts with AWS services such as Amazon S3, Amazon EC2, Amazon DynamoDB, and others.

Install
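boto3 is published on PyPI, so the usual installation is:

pip install boto3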

Key features

Boto3 has two distinct levels of APIs. Client (or "low-level") APIs provide one-to-one mappings to the underlying HTTP API operations. Resource APIs hide explicit network calls and instead provide resource objects and collections for accessing attributes and performing actions.
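A minimal sketch of the two levels, listing S3 buckets both ways; it assumes AWS credentials and a default region are already configured:

import boto3

# Low-level client: calls map one-to-one to S3 HTTP API operations.
s3_client = boto3.client('s3')
for bucket in s3_client.list_buckets()['Buckets']:
    print(bucket['Name'])

# Resource API: objects and collections instead of explicit calls.
s3 = boto3.resource('s3')
for bucket in s3.buckets.all():
    print(bucket.name)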

A modern and consistent interface

Both the client and resource interfaces of boto3 dynamically generate classes from JSON models that describe the AWS APIs. This allows updates to be delivered quickly, with strict consistency across all supported services.

Support for Python 2 and 3

Boto3 has built-in support for Python versions 2.7+ and 3.4+.

Waiters

Boto3 comes with waiters, which automatically poll AWS resources for pre-defined status changes. For example, you can launch an Amazon EC2 instance and use a waiter to wait until it reaches the running state, or you can create a new Amazon DynamoDB table and wait until it becomes available for use. Boto3 has waiters for both the client and resource APIs.
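A sketch of the EC2 case described above; the AMI ID is a placeholder:

import boto3

ec2 = boto3.client('ec2')

# Launch an instance, then block until EC2 reports it as running.
result = ec2.run_instances(ImageId='ami-12345678',   # placeholder AMI
                           InstanceType='t2.micro',
                           MinCount=1, MaxCount=1)
instance_id = result['Instances'][0]['InstanceId']

ec2.get_waiter('instance_running').wait(InstanceIds=[instance_id])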

Service-specific high-level features

Boto3 ships with many service-specific features, such as automatic multi-part transfers for Amazon S3 and simplified query conditions for Amazon DynamoDB.
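A short sketch of both features; the bucket, file, and table names are placeholders:

import boto3
from boto3.dynamodb.conditions import Key

# S3: upload_file switches to managed multi-part transfers for large files.
s3 = boto3.client('s3')
s3.upload_file('big_dataset.csv', 'my-bucket', 'data/big_dataset.csv')

# DynamoDB: condition helpers instead of hand-written expression strings.
table = boto3.resource('dynamodb').Table('parcels')
response = table.query(KeyConditionExpression=Key('parcel_id').eq('110120031'))
print(response['Items'])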


Pressing Command + Q on both machines will exit the migration assistant.

I have a feeling that it copies all files to a temporary location before installing them/creating user accounts. So you should be fine. Depending on how far in you are, you might have problems.

The obvious answer is the migration is not supposed to be stopped, but you might disable the network or otherwise halt the machine that is sending the data.

It should be fine since it's just sending data.

The receiving mac might handle the interruption well or not. Something was working on the mac to start the migration assistant - so you should be able to get back to that state fairly easily.

There are specific steps to clean up based on what part of the transfer was in progress.

Safest is to go back to a sane backup and attempt migration again with a faster connection or more time.

Post what happens here or as a follow-on question - it's pretty easy to clean up the user accounts if they are the part that got interrupted (instead of apps, system settings, or random non-user files being transferred).

Half a user is usually what you end up with and that's not good in general.


PySAL Components

PySAL is a family of packages for spatial data science and is divided into four major components:

libpysal solves a wide variety of computational geometry problems, including: graph construction from polygonal lattices, lines, and points; construction and interactive editing of spatial weights matrices and graphs; computation of alpha shapes, spatial indices, and spatial-topological relationships; and reading and writing of sparse graph data, as well as pure-Python readers of spatial vector data. Unlike other PySAL modules, these functions are exposed together as a single package.

libpysal : libpysal provides foundational algorithms and data structures that support the rest of the library. This currently includes the following modules: input/output (io), which provides readers and writers for common geospatial file formats; weights (weights), which provides the main class to store spatial weights matrices, as well as several utilities to manipulate and operate on them; computational geometry (cg), with several algorithms, such as Voronoi tessellations or alpha shapes, that efficiently process geometric shapes; and an additional module with example data sets (examples).
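A minimal sketch of building a contiguity weights matrix with libpysal; the shapefile path is a placeholder, and geopandas is assumed for reading it:

import geopandas as gpd
from libpysal.weights import Queen

gdf = gpd.read_file('tracts.shp')   # placeholder polygon data
w = Queen.from_dataframe(gdf)       # queen-contiguity spatial weights
w.transform = 'r'                   # row-standardize

print(w.n, w.mean_neighbors)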

Explore

The explore layer includes modules to conduct exploratory analysis of spatial and spatio-temporal data. At a high level, packages in explore are focused on enabling the user to better understand patterns in the data and suggest new interesting questions rather than answer existing ones. They include methods to characterize the structure of spatial distributions (either on networks, in continuous space, or on polygonal lattices). In addition, this domain offers methods to examine the dynamics of these distributions, such as how their composition or spatial extent changes over time.

esda : esda implements methods for the analysis of both global (map-wide) and local (focal) spatial autocorrelation, for both continuous and binary data. In addition, the package increasingly offers cutting-edge statistics about boundary strength and measures of aggregation error in statistical analyses
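A toy sketch of global spatial autocorrelation with esda, using a regular lattice so the example stays self-contained:

import numpy as np
from libpysal.weights import lat2W
from esda.moran import Moran

w = lat2W(10, 10)            # rook-contiguity weights for a 10x10 grid
y = np.random.random(100)    # replace with a real variable of interest

mi = Moran(y, w)
print(mi.I, mi.p_sim)        # Moran's I and its permutation-based p-value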

giddy : giddy is an extension of esda to spatio-temporal data. The package hosts state-of-the-art methods that explicitly consider the role of space in the dynamics of distributions over time

inequality : inequality provides indices for measuring inequality over space and time. These include classic measures, such as the Theil T information index and the Gini index in mean deviation form, as well as spatially-explicit measures that incorporate the location and spatial configuration of observations in the calculation of inequality measures.

pointpats : pointpats supports the statistical analysis of point data, including methods to characterize the spatial structure of an observed point pattern: a collection of locations where some phenomena of interest have been recorded. This includes measures of centrography which provide overall geometric summaries of the point pattern, including central tendency, dispersion, intensity, and extent.
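A toy sketch of those centrography summaries on random coordinates:

import numpy as np
from pointpats import centrography

points = np.random.random((50, 2))          # n x 2 coordinate array

print(centrography.mean_center(points))     # central tendency
print(centrography.std_distance(points))    # dispersion around the center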

segregation : the segregation package calculates over 40 different segregation indices and provides a suite of additional features for measurement, visualization, and hypothesis testing that together represent the state of the art in quantitative segregation analysis.

spaghetti : spaghetti supports the spatial analysis of graphs, networks, topology, and inference. It includes functionality for the statistical testing of clusters on networks, a robust all-to-all Dijkstra shortest-path algorithm with multiprocessing functionality, high-performance geometric and spatial computations using geopandas that are necessary for high-resolution interpolation along networks, and the ability to connect near-network observations onto the network.

Model

In contrast to explore, the model layer focuses on confirmatory analysis. In particular, its packages focus on the estimation of spatial relationships in data with a variety of linear, generalized-linear, generalized-additive, nonlinear, multi-level, and local regression models.

mgwr : mgwr provides scalable algorithms for estimation, inference, and prediction using single- and multi-scale geographically-weighted regression models in a variety of generalized linear model frameworks, as well as model diagnostics tools.

spglm : spglm implements a set of generalized linear regression techniques, including Gaussian, Poisson, and logistic regression, that allow for sparse matrix operations in their computation and estimation to lower memory overhead and decrease computation time.

spint : spint provides a collection of tools to study spatial interaction processes and analyze spatial interaction data. It includes functionality to facilitate the calibration and interpretation of a family of gravity-type spatial interaction models, including those with production constraints, attraction constraints, or a combination of the two.

spreg : spreg supports the estimation of classic and spatial econometric models. Currently it contains methods for estimating standard Ordinary Least Squares (OLS), Two Stage Least Squares (2SLS), and Seemingly Unrelated Regressions (SUR), in addition to various tests of homoskedasticity, normality, spatial randomness, and different types of spatial autocorrelation. It also includes a suite of tests for spatial dependence in models with binary dependent variables.
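A minimal OLS sketch with spreg on synthetic data; the variable names are arbitrary:

import numpy as np
from spreg import OLS

n = 100
x = np.random.random((n, 2))
y = 1.0 + x @ np.array([[0.5], [1.5]]) + np.random.normal(size=(n, 1))

model = OLS(y, x, name_y='outcome', name_x=['x1', 'x2'])
print(model.summary)   # coefficients plus the standard diagnostics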

spvcm : spvcm provides a general framework for estimating spatially-correlated variance components models. This class of models allows for spatial dependence in the variance components, so that nearby groups may affect one another. It also provides a general-purpose framework for estimating models using Gibbs sampling in Python, accelerated by the numba package.

tobler : tobler provides functionality for areal interpolation and dasymetric mapping. Its name is an homage to the legendary geographer Waldo Tobler, a pioneer of dozens of spatial analytical methods. tobler includes functionality for interpolating data using area-weighted approaches, regression model-based approaches that leverage remotely-sensed raster data as auxiliary information, and hybrid approaches.

access : access aims to make it easy for analysts to calculate measures of spatial accessibility. This work has traditionally had two challenges: [1] to calculate accurate travel time matrices at scale and [2] to derive measures of access using the travel times and supply and demand locations. access implements classic spatial access models, allowing easy comparison of methodologies and assumptions.

spopt: spopt is an open-source Python library for solving optimization problems with spatial data. Originating from the original region module in PySAL, it is under active development for the inclusion of newly proposed models and methods for regionalization, facility location, and transportation-oriented solutions.

Viz

The viz layer provides functionality to support the creation of geovisualisations and visual representations of outputs from a variety of spatial analyses. Visualization plays a central role in modern spatial/geographic data science. Current packages provide classification methods for choropleth mapping and a common API for linking PySAL outputs to visualization tool-kits in the Python ecosystem.

legendgram : legendgram is a small package that provides "legendgrams", legends that visualize the distribution of observations by color in a given map. These distributional visualizations for map classification schemes assist in analytical cartography and spatial data visualization.

mapclassify : mapclassify provides functionality for choropleth map classification. Currently, fifteen different classification schemes are available, including a highly-optimized implementation of Fisher-Jenks optimal classification. Each scheme inherits a common structure that ensures computations are scalable and supports applications in streaming contexts.
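A small sketch of Fisher-Jenks classification with mapclassify on skewed toy data:

import numpy as np
import mapclassify

y = np.random.gamma(2.0, size=200)   # skewed toy variable

fj = mapclassify.FisherJenks(y, k=5)
print(fj.bins)      # upper bound of each of the five classes
print(fj.yb[:10])   # class assigned to the first ten observations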

splot : splot provides statistical visualizations for spatial analysis. It offers methods for visualizing global and local spatial autocorrelation (through Moran scatterplots and cluster maps), temporal analysis of cluster dynamics (through heatmaps and rose diagrams), and multivariate choropleth mapping (through value-by-alpha maps). A high-level API supports the creation of publication-ready visualizations.
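A toy sketch of a Moran scatterplot with splot, reusing the lattice setup from the esda example above:

import numpy as np
import matplotlib.pyplot as plt
from libpysal.weights import lat2W
from esda.moran import Moran
from splot.esda import moran_scatterplot

w = lat2W(8, 8)
y = np.random.random(64)

fig, ax = moran_scatterplot(Moran(y, w))
plt.show()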


Find the Data

The VIIRS DNB NTL image layer shows Earth’s surface and atmosphere using a sensor designed to capture low-light emission sources under varying illumination conditions.

Black Marble Nighttime Blue/Yellow Composite showing the Nile River Delta on May 7, 2021. The Blue/Yellow Composite is a false color image created using the VIIRS at-sensor radiance and brightness temperatures from the M15 band. Interactively explore this image using NASA Worldview. NASA Worldview image.

NASA has developed the Black Marble, a daily calibrated, corrected, and validated product suite that enables effective use of NTL data for scientific observations. Black Marble's standard science processing removes cloud-contaminated pixels and corrects for atmospheric, terrain, vegetation, snow, lunar, and stray light effects on VIIRS DNB radiances. Black Marble data products are available at LAADS DAAC. Black Marble also provides a Nighttime Blue/Yellow Composite which is a false color composite created using the VIIRS at-sensor radiance and brightness temperatures from the M15 band. This color combination is especially useful for first responders, enhancing their ability to detect power outages.

Users can visualize and acquire NTL images through NASA's Global Imagery Browse Services (GIBS). The GIS Data Pathfinder's geospatial services section provides information for the GIBS web map service (WMS).

Screenshot showing how to add VIIRS Day Night Band At Sensor Radiance into QGIS via the GIBS WMS. NASA image.
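For scripted access outside QGIS, a hedged sketch using owslib against the GIBS WMS; the endpoint URL and the layer identifier are assumptions and should be verified against the GIBS documentation:

from owslib.wms import WebMapService

# Endpoint and layer id are assumptions; check the GIBS docs before use.
wms = WebMapService('https://gibs.earthdata.nasa.gov/wms/epsg4326/best/wms.cgi',
                    version='1.1.1')

img = wms.getmap(layers=['VIIRS_SNPP_DayNightBand_At_Sensor_Radiance'],
                 srs='EPSG:4326',
                 bbox=(29.5, 29.5, 32.5, 31.5),   # roughly the Nile Delta
                 size=(600, 400),
                 format='image/png',
                 time='2021-05-07')

with open('ntl_nile_delta.png', 'wb') as f:
    f.write(img.read())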

The LAADS DAAC offers a Black Marble HDF to GeoTIFF Converter tool, a Python script that reads Black Marble files, converts them to GeoTIFF, and displays them (if run within a GIS Python console). See the Tools section for more information on using this tool.

To learn more about Black Marble data and applications, see the NASA Applied Remote Sensing Training (ARSET) program: Introduction to NASA's "Black Marble" Night Lights Data.


MLOps requirements

The team had to meet several key requirements to scale up the solution from the pilot field study. Originally, only a few models had been developed for a single sales region. The team had to deploy a wider-scale implementation that enabled the development of custom machine learning models for all sales regions, and that included:

Weekly batch processing for large and small stores in each region including retraining of each model with new datasets.

Continuous refinement of the machine learning models.

Integration of the development/test/package/test/deploy process common to CI/CD in a DevOps-like processing environment for MLOps.

This represents a shift in how data scientists and data engineers have commonly worked in the past.

A unique model representing each region for both large and small stores, based on the stores' history, demographics, and other key variables. The model had to process the whole dataset to minimize the risk of processing errors.

The ability to initially upscale to support 14 sales regions with plans to upscale further.

Plans for additional models for longer term forecasting for regions and other store clusters.


You can't.

Longer term plans (e.g. 14.04)

Move Python 2 to universe, port all Python applications in main to Python 3. We will never fully get rid of Python 2.7, but since there will also never be a Python 2.8, and Python 2.7 will be nearly 4 years old by the time of the 14.04 LTS release, it is time to relegate Python 2 to universe.

This means that a lot of base packages have hard dependencies on 2.7, and it will still take a lot of time to get things migrated. Note that Python 3 has numerous backwards-incompatible changes; it's not a regular package upgrade.

If you really want to get rid of Python 2.7, you'll have to wait for the 14.04 release, but there's no guarantee.

Came here in 2019 because I develop in Python3 by default, and came to the same conclusion as OP after seeing what'd be removed after running apt purge python.

Since what I really wanted was to call Python3 with just python, I ran a command to create the symbolic link; assuming the standard binary locations, something like:
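sudo ln -s /usr/bin/python3 /usr/local/bin/python   # /usr/local/bin precedes /usr/bin in PATH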

This way, if Python2.7 is still needed, it can be called explicitly with python2.7 while simply calling python will default to Python3 because of the symbolic link.

I don't have any bash level scripts that call python2.7 with python so this change wouldn't be disruptive - while other systems would need their scripts adjusted accordingly if they did.

The main barrier to a distribution switching the python command from python2 to python3 isn't breakage within the distribution, but instead breakage of private third party scripts developed by sysadmins and other users.

This answer isn't a direct response to OP, but as someone who had a similar question this is the functionality I was looking for when I was thinking of removing 2.7. Rather than delete, just prioritize which one gets to use python .

