Workbench
595 results found
-
Make Spark job monitoring console available
Spark job monitoring console?
10 votes -
Color code Recent Notebooks
Color code Recent Notebooks so that it is easy to see which ones are active and which are not
2 votes -
Search button next to the Search field
Search button next to searchbox
Having to hit [Enter] after typing something in the search box is not intuitive. The New Notebook button looks like a Search button, but doesn't search.
3 votes -
Additional connection panel
Would like to see a panel listing all relevant connections, e.g. JDBC/ODBC connections, HDFS connections, Hive, and Bluemix service connections (Spark cluster, dashDB, Cloudant) that can be easily imported into a notebook.
7 votes -
Progress bar
A progress bar with an estimated time to completion.
3 votes -
Auto completion
Autocompletion for functions, classes, methods, etc. I should be able to start typing "math." and have a list of the members pop up. As I type more, the list should be limited to those members matching what I've typed so far. As I scroll down the list of members, a popup should show me the signature and description.
Basically, Intellisense!
11 votes -
Better IDE Integration with Notebook (API / Code search)
A better IDE integration in the notebook. I often struggle with what's the best code to write. If the notebook could make suggestions, it would be easier for developers (or people like me who don't know many commands and are still learning) to write better code or find commands in the API, like we do in Eclipse.
4 votes -
Source folder content
When I download/upload a file, it goes into the source directory. I need a feature that shows me all of my files in the /source/ folder. It would prevent overwriting files.
5 votes -
Dealing with different output types
The output of a Python program can be an HTML file, which is stored in the resource folder. To see it, I have to download it to my computer and then browse it. It would be great if I could click on it in the right toolbar (i.e. Recent Data) and browse it directly in a new tab.
6 votes -
Downloading notebooks
I need to back up all of my notebooks on my computer.
6 votes -
Live view of slideshow
The feature of having a live slideshow option is very attractive. See RISE: "Live" Reveal.js Jupyter/IPython Slideshow Extension
5 votes -
A confirm button when deleting a notebook so I don't accidentally delete my life's work
Please, for Polong :'(
20 votes · under review · Admin Leon Katsnelson (Director & CTO, IBM Analytics Emerging Technology, Cognitive Class Labs) responded:
We are considering the best approach for preventing accidental notebook deletions.
-
Search should jump to the right cell
If I use the search and click on a result, it only opens the notebook, not the line or cell containing the search result.
I still need to use the browser search to find the right line. Thanks,
Axel
1 vote -
Access Developed Model via Web Services
Hi All,
As a feature request I would like to request that the tool or model built through the Data Scientist Workbench to be accessible through web service and API calls.
I compare this solution to Microsoft Azure's Machine Learning Studio, which has a nice feature called "Publish Web Services". It allows data scientists to share their models, which accept user data and return predictions. This fits nicely with the direction of anything-as-a-service.
Thanks,
Yin
1 vote -
Allow adding of XML data into workbench
Please add the ability to add XML data into the workbench. I was not able to drag/drop XML data or add it via URL into the workbench.
2 votes -
A team share mode, like Google Docs.
It would make it easier to work in a team.
5 votes -
Explain why the message "A master URL must be set in your configuration" is generated
Trying to run the example from the Spark website, shown below:
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object SimpleApp {
  def main(args: Array[String]) {
    val logFile = """C:\Users\John\scalatest.txt""" // Should be some file on your system
    val conf = new SparkConf().setAppName("Simple Application")
    val sc = new SparkContext(conf)
    val logData = sc.textFile(logFile, 2).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
  }
}
2 votes -
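For reference, the "A master URL must be set in your configuration" error means the SparkConf was never told which cluster to connect to: either pass `--master` to spark-submit, or set it in code. A minimal sketch of the in-code fix, reusing the file path from the post above ("local[*]" runs Spark in-process on all local cores and requires a Spark runtime on the classpath):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object SimpleApp {
  def main(args: Array[String]): Unit = {
    val logFile = """C:\Users\John\scalatest.txt""" // path taken from the post above

    // setMaster supplies the missing master URL; without it (or a --master
    // flag to spark-submit), SparkContext throws the error quoted in the post.
    val conf = new SparkConf()
      .setAppName("Simple Application")
      .setMaster("local[*]")
    val sc = new SparkContext(conf)

    val logData = sc.textFile(logFile, 2).cache()
    val numAs = logData.filter(_.contains("a")).count()
    val numBs = logData.filter(_.contains("b")).count()
    println(s"Lines with a: $numAs, Lines with b: $numBs")
    sc.stop()
  }
}
```

When submitting to a real cluster, leave setMaster out of the code and pass the master URL at launch time instead, so the same jar works everywhere.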
Allow logging in with email instead of user id
It is now very common on the web to be able to log in to a service with an email address instead of a user id. Ideally, Data Scientist Workbench should allow users to log in with either a user id or an email address.
8 votes -
Shortened URLs (Google) for notebooks cannot be imported
Importing others' DSWB notebooks doesn't work if the URL is shortened (e.g. with Google's URL shortener).
4 votes