Thursday, 28 February 2013

NetSurf on Virtual Acorn

On 27th January 2013 David Pitt wrote most cogently:

> I have saved logs from a successful fetch with NetSurf 2.9 and a
> timeout from NetSurf 3.0 #833. The full logs are at :-
>
> http://pittdj.co.uk/temp/NSlogs.zip
>
> The interesting bits are :-
>
> NetSurf 3 #833 on Mac OS 10.8.2 - timed out
>
> (3.240000) desktop/browser.c browser_window_go_post 855: bw
> 0x25baa658, url http://www.google.co.uk/
> (3.240000) desktop/browser.c browser_window_go_post 976: Loading
> 'http://www.google.co.uk/'
> (3.240000) content/fetchers/curl.c fetch_curl_setup 366: fetch
> 0x25bd8120, url 'http://www.google.co.uk/'
> (33.250000) content/fetchers/curl.c fetch_curl_done 843: done
> http://www.google.co.uk/
> (33.250000) content/fetchers/curl.c fetch_curl_done 880: Unknown cURL
> response code 28
> (33.250000) content/fetchers/curl.c fetch_curl_stop 725: fetch
> 0x25bd8120, url 'http://www.google.co.uk/'
> (33.260000) riscos/gui.c warn_user 2339: Resolving timed out after
> 30000 milliseconds (null)
>
> NetSurf 2.90 on Mac OS 10.8.2 - successful
>
> (2.990000) desktop/browser.c browser_window_go_post 832: bw
> 0x25ba6948, url http://www.google.co.uk/
> (2.990000) desktop/browser.c browser_window_go_post 953: Loading
> 'http://www.google.co.uk/'
> (2.990000) content/fetchers/curl.c fetch_curl_setup 365: fetch
> 0x25b99e08, url 'http://www.google.co.uk/'
> (3.200000) content/fetchers/curl.c fetch_curl_process_headers 1171:
> HTTP status code 200
> (3.200000) content/fetchers/curl.c fetch_curl_done 813: done
> http://www.google.co.uk/
> (3.200000) content/fetchers/curl.c fetch_curl_stop 695: fetch
> 0x25b99e08, url 'http://www.google.co.uk/'

I am still having the same problem with bleeding edge development
versions, though 2.9 works well.

Does anyone have a resolution of this problem, or are there any
prospects of a resolution?

--
Tim Powys-Lybbe tim@powys.org
for a miscellany of bygones: http://powys.org/

Re: [Rpcemu] Network setup question

In article <20130228103957.GA9318@spod.org>,
Peter Howkins <rpcemu.howkins@marutan.net> wrote:
> On Thu, Feb 28, 2013 at 06:20:28AM +0000, Dave Symes wrote:
> > The question: It says in the Networking info (Windows side) that
> > because XP doesn't have a native TAP driver, one needs to be
> > installed, and that's what happened some while back when I did get
> > RPCEmu networking on an XP install.
> >
> > Does Win-7 Pro have its own TAP driver, if so, does it obviate
> > installing an additional TAP driver?

> All versions of Windows need the TAP driver installed; I'll tweak the
> docs.

> Peter

Thanks, Tony and Peter.

Dave

--

Dave Triffid

_______________________________________________
Rpcemu mailing list
Rpcemu@riscos.info
http://www.riscos.info/cgi-bin/mailman/listinfo/rpcemu

Re: [Rpcemu] Network setup question

On Thu, Feb 28, 2013 at 06:20:28AM +0000, Dave Symes wrote:
> The question:
> It says in the Networking info (Windows side) that because XP doesn't have
> a native TAP driver, one needs to be installed, and that's what happened
> some while back when I did get RPCEmu networking on an XP install.
>
> Does Win-7 Pro have its own TAP driver, if so, does it obviate installing
> an additional TAP driver?

All versions of Windows need the TAP driver installed; I'll tweak the
docs.

Peter

--
Peter Howkins
peter.howkins@marutan.net

Re: [Rpcemu] Network setup question

On 28 Feb 2013, Dave Symes <dave@triffid.co.uk> wrote:

[snip]

> It says in the Networking info (Windows side) that because XP doesn't
> have a native TAP driver, one needs to be installed, and that's what
> happened some while back when I did get RPCEmu networking on an XP
> install.
>
> Does Win-7 Pro have its own TAP driver, if so, does it obviate
> installing an additional TAP driver?

I suspect that Win7 is the same as WinXP. Win7 F1 Help draws a blank on
'virtual ethernet adaptor', and a Google search for 'windows 7 virtual
ethernet adaptor' finds plenty of instructions on how to install one.

> If not, I guess continuing with the method in the info is the way to
> go?

Here, I installed the TAP driver on a Win7 32bit machine, and it works.

Tony




Re: BBC News websites

On 27 Feb 2013 Martin Bazley wrote:

> If you're still using 2.9, you don't need to be told. Simply open
> WWW.NetSurf.Log in your computer's ScrapDir (note: this only works when
> NetSurf is *not* running) and it should be right there in the first few
> lines.

Thanks for the information. I hadn't thought of looking there.

I normally use a recent development version, currently #942, but that
doesn't exhibit the redirect-to-mobile problem, so I used 2.9 to
illustrate it. My original complaint to the BBC was raised last
November.

--
Richard Porter http://www.minijem.plus.com/
mailto:ricp@minijem.plus.com
I don't want a "user experience" - I just want stuff that works.

Wednesday, 27 February 2013

[Rpcemu] Network setup question

Good morning...
I've just repaired a broken PC which now has Win-7 Pro 64bit SP1 as the
OS, and I've moved over a copy of RPCEmu 0.8.9 running RO 6.20 (works
okay) from another computer.

On this repaired machine (with no VRPC) I can actually set up RPCEmu
with networking.

The question:
It says in the Networking info (Windows side) that because XP doesn't have
a native TAP driver, one needs to be installed, and that's what happened
some while back when I did get RPCEmu networking on an XP install.

Does Win-7 Pro have its own TAP driver, if so, does it obviate installing
an additional TAP driver?

If not, I guess continuing with the method in the info is the way to go?

Thanks
Dave

--

Dave Triffid

Re: BBC News websites

The following bytes were arranged on 27 Feb 2013 by Richard Porter :

> The problem still affects version 2.9 if not the latest dev versions.
> Can you tell me what the useragent details are (or were)?

If you're still using 2.9, you don't need to be told. Simply open
WWW.NetSurf.Log in your computer's ScrapDir (note: this only works when
NetSurf is *not* running) and it should be right there in the first few
lines.

Here, using #909, the sixth line of the log file is:

(0.890000) utils/useragent.c user_agent_build_string 68: Built user
agent "NetSurf/3.0 (RISC OS)"

--
__<^>__ Red sky in the morning: Shepherd's warning
/ _ _ \ Red sky at night: Shepherd's delight
( ( |_| ) ) Mince and potatoes: Shepherd's pie
\_> <_/ ======================= Martin Bazley ==========================

Re: BBC News websites

On 15 Jan 2013 Michael Drake wrote:

> In article
> <OUT-50F5C18E.MD-1.4.17.chris.young@unsatisfactorysoftware.co.uk>,
> Chris Young <chris.young@unsatisfactorysoftware.co.uk> wrote:

>> I'm not sure I necessarily agree with this workaround - a website
>> deciding that everything running ARM must want the mobile version of a
>> page is making a pretty big assumption

> I decided there was no good reason for leaking processor architecture
> information anyway.

After a long wait I eventually got a response to my complaint. Well,
the first one suggested a cookie problem, but having disabused them of
that idea they sent this:

"We understand that you are still being directed to the mobile version
of the BBC iPlayer site rather than the desktop version.

"If you can give us the name and version number of your web browser,
your PC's operating system and, if known, your 'useragent' details, we
can investigate this further."

The problem still affects version 2.9 if not the latest dev versions.
Can you tell me what the useragent details are (or were)?

Richard
--
Richard Porter http://www.minijem.plus.com/
mailto:ricp@minijem.plus.com
I don't want a "user experience" - I just want stuff that works.

Re: Failure to post bug report.

In message <OUT-512D18E5.MD-1.4.17.chris.young@unsatisfactorysoftware.co.uk>
"Chris Young" <chris.young@unsatisfactorysoftware.co.uk> wrote:

> On Tue, 26 Feb 2013 20:00:35 GMT, Dave Higton wrote:
>
> > Three different results with #949 json (JS enabled):
> >
> > 1) georgesregisjazzband: causes a crash.
> >
> > 2) document-records: renders OK.
> >
> > 3) amiga.org: infinite fetching.
>
> With the absolute latest and JS enabled, amiga.org fetches for ages and
> then displays a message which simply says "OK" (reloading then displays it
> correctly). At one point it crashed, though I didn't save the crash log.

#953 gets them all OK (there is a long delay with amiga.org but that
can't be NS's fault).

My thanks to all the team!

Dave

Re: [Rpcemu] Disc not understood

In article <ab8b192453.old_coaster@old_coaster.yahoo.co.uk>,
Tony Moore <old_coaster@yahoo.co.uk> wrote:

> Are you certain that the hd4.hdf file on the Sony machine has not been
> corrupted?

Absolutely certain. I moved the file from the Samsung where it worked and
tried it on the Sony where it didn't. I then moved it back to the Samsung
and it worked fine.

> The hd4.hdf files available at http://www.marutan.net/rpcemu/
> are pre-formatted, and re-formatting should be unnecessary.

I agree, it works fine on the Samsung.

> [snip HForm screengrabs]

> From the screengrabs, it seems that HForm must be running on both
> machines, presumably in HostFS.

Indeed.

> If that is so, I would suggest that, on the Sony machine, you forget
> about ADFS - for now - and boot from HostFS. To do that: on the Samsung
> machine, copy !Boot from ADFS to HostFS. Then move !Boot from Samsung
> Program Files\RPCEmu\hostfs to Sony Program Files\RPCEmu\hostfs (via
> network, or flash-drive). This should ensure that filetypes/extensions
> are not screwed-up.

> On the Sony machine, run RPCEmu and, when it stops with 'disc not found'
> error, or whatever, issue the commands

> *configure filesystem hostfs
> *configure boot

> then quit and re-start RPCEmu. If booting is stopped by another error,
> try *unplug , then *rmreinit each unplugged module.

> When you have a working machine you could then further investigate the
> reason for hd4.hdf not being recognised.

Sounds good, I'll give that a go, thanks.

Cheers,

Bob.

--
Bob Latham
Stourbridge, West Midlands

Re: [Rpcemu] MDF query

On 26 Feb 2013, Bob Latham <bob@mightyoak.org.uk> wrote:
> In article <fd2faf2353.dougjwebb@doug.j.webb.btinternet.com>,

[snip]

> > So download the HardDisc4 self-extracting file, change its file
> > type to &FFC (i.e. select the file > Menu > Set file type &FFC),
> > then double-click on it to extract. All this is mentioned by
> > clicking on the blue information icon to the right of the files on
> > the ROOL website.
>
> Hi Doug,
>
> Thanks for the help.
>
> Tried that, got an error: "no room to run transient"
>
> I assume this is because I don't have a RISC OS 5 machine.

The next slot may be too small. However, I believe that simply
dragging out the Next slot in the Task Manager will not work, and
you'll need to re-configure it, say

*wimpslot -next 30720K

(i.e. 30 MB), then re-try extracting the file.

Tony




Re: Failure to follow on-page links

Richard Porter wrote

> On 26 Feb 2013 John Rickman Iyonix wrote:

>>> I suspect that NS can't cope with local links in a scrolling pane with
>>> absolute positioning in CSS.

>> NS does not do absolute positioning.

> Yes of course it does. The frames are located by absolute positioning,
> not tables. The page layout is OK but NS won't jump to on-page name
> anchors within the scrolling text.

Sorry, I thought you were referring to CSS fixed positioning. I
needed to fix the position of a section of a web page when it was
scrolled and discovered that fixed positioning was not yet
implemented.
John

--
John Rickman - http://rickman.orpheusweb.co.uk/lynx
For ye have the poor always with you; but me ye have not always.

Tuesday, 26 February 2013

Re: Failure to follow on-page links

In message <710c4a2453.iyojohn@rickman.argonet.co.uk>
John Rickman Iyonix <rickman@argonet.co.uk> wrote:

>Graham Pickles wrote
>
>> In message <ff7a292453.iyojohn@rickman.argonet.co.uk>
>> John Rickman Iyonix <rickman@argonet.co.uk> wrote:
>
>>>Richard Porter wrote
>>>
>>>> On this page and similar ones on the site
>>>
>>>> http://www.rdtev.de/aktuelles.html
>>>
>>>> the navigation links on the left work fine but the on-page links in
>>>> the lower right hand pane (which scrolls) do not work. The layout is
>>>> defined in the css file. There is no javascript. If you click on a
>>>> local link NS does a bit of processing but leaves the page unchanged.
>>>> There are name tags with both name and ID specified.
>>>
>>>> I suspect that NS can't cope with local links in a scrolling pane with
>>>> absolute positioning in CSS.
>>>
>>>NS does not do absolute positioning.
>>>
>
>> Yet? or Never?
>
>From the progress page on the NetSurf website:
>
> CSS properties
>
>   Title      Status        Notes
>   ...
>   position   In progress   Fixed position not implemented.
>
>John
>
Many thanks.

Regards,

--
Graham Pickles
www.whitbymuseum.org.uk Whitby Museum

Re: Parser breaking with ";" in the text field.

On Tue, 2013-02-26 at 07:09 +0000, John-Mark Bell wrote:
> On Mon, 2013-02-25 at 23:37 -0500, Anil Jangam wrote:
> > Team,
> >
> >
> > I observed that HTML parser (hubbub-0.1.2) is breaking when it finds a
> > SEMICOLON in the text field. I am giving below an example of the text
> > string.
>
> [...]
>
> > <meta http-equiv="content-type" content="text/html; charset=UTF-8" />
>
> > When it finds the ';', it stops working. When I remove this ';' from
> > the string, it works fine. Can you please check, if this is an issue
> > with the parser or if I am missing anything?
>
> Can you explain what you mean by "stops working"?

It turns out, this was the result of a bug in the example code: it
completely failed to correctly reset the world when the encoding
changed.

Fixed in git:
http://git.netsurf-browser.org/libhubbub.git/commit/?id=4e091eb81e7a5ada5d8aafa7990d094f276f2099

Thanks,


J.

Re: Failure to follow on-page links

Graham Pickles wrote

> In message <ff7a292453.iyojohn@rickman.argonet.co.uk>
> John Rickman Iyonix <rickman@argonet.co.uk> wrote:

>>Richard Porter wrote
>>
>>> On this page and similar ones on the site
>>
>>> http://www.rdtev.de/aktuelles.html
>>
>>> the navigation links on the left work fine but the on-page links in
>>> the lower right hand pane (which scrolls) do not work. The layout is
>>> defined in the css file. There is no javascript. If you click on a
>>> local link NS does a bit of processing but leaves the page unchanged.
>>> There are name tags with both name and ID specified.
>>
>>> I suspect that NS can't cope with local links in a scrolling pane with
>>> absolute positioning in CSS.
>>
>>NS does not do absolute positioning.
>>

> Yet? or Never?

From the progress page on the NetSurf website:

CSS properties

  Title      Status        Notes
  ...
  position   In progress   Fixed position not implemented.

John


Re: Failure to post bug report.

On 26 Feb, cvjazz@waitrose.com wrote:
> In article <14ba382453.pnyoung@pnyoung.ormail.co.uk>,
> Peter Young <pnyoung@ormail.co.uk> wrote:
> > On 26 Feb 2013 Dave Higton <dave@davehigton.me.uk> wrote:

> > > In message <ae6af42353.pnyoung@pnyoung.ormail.co.uk>
> > > Peter Young <pnyoung@ormail.co.uk> wrote:

> > >> BTW I managed to report the bug using an older version of NetSurf; #723 it
> > >> was, which I happened to have lying around.

> > > I've just submitted a bug report using #949 json (JS enabled).

> > Thanks. Whatever the problem is, it looks as if it's being worked on.

> Thanks from here also.

Progress.


http://www.georgesregisjazzband.com/


http://www.document-records.com/

Both working in #951

Well done chaps. (I don't think there are any chapesses involved).


http://www.amiga.org

Still not but I'm sure they'll get there.

--
Chris

Re: Failure to post bug report.

In article <14ba382453.pnyoung@pnyoung.ormail.co.uk>,
Peter Young <pnyoung@ormail.co.uk> wrote:
> On 26 Feb 2013 Dave Higton <dave@davehigton.me.uk> wrote:

> > In message <ae6af42353.pnyoung@pnyoung.ormail.co.uk>
> > Peter Young <pnyoung@ormail.co.uk> wrote:

> >> BTW I managed to report the bug using an older version of NetSurf; #723 it
> >> was, which I happened to have lying around.

> > I've just submitted a bug report using #949 json (JS enabled).

> Thanks. Whatever the problem is, it looks as if it's being worked on.

Thanks from here also.

--
Chris

Re: Failure to post bug report.

On 26 Feb 2013 Dave Higton <dave@davehigton.me.uk> wrote:

> In message <ae6af42353.pnyoung@pnyoung.ormail.co.uk>
> Peter Young <pnyoung@ormail.co.uk> wrote:

>> BTW I managed to report the bug using an older version of NetSurf; #723 it
>> was, which I happened to have lying around.

> I've just submitted a bug report using #949 json (JS enabled).

Thanks. Whatever the problem is, it looks as if it's being worked on.

With best wishes,

Peter.

--
Peter Young (zfc Ta) and family
Prestbury, Cheltenham, Glos. GL52, England
http://pnyoung.orpheusweb.co.uk
pnyoung@ormail.co.uk

Re: libwapcaplet - memory leak?

On Tue, 26 Feb 2013 13:11:01 +0000, Rob Kendrick wrote:

> On Tue, Feb 26, 2013 at 01:08:33PM +0000, Daniel Silverstone wrote:
> > If this is *truly* a use-case we need to think about, I'll ponder making the
> > root somehow static so it gets unloaded automatically, although that might
> > complicate rehashing.
>
> The ugly destructor approach could be workable: if for some reason a
> host's compiler doesn't support it, we can wrap it in a suitable #ifdef
> and they can deal with the leak.

I would have thought a destructor would be the simplest option.

The reason I picked up on this is that I'm doing some experiments
with libwapcaplet and on-chip memory on the AMCC460ex (SAM460), as it
looked like a prime candidate for getting a bit of a speed boost. I
quickly noticed that there was no freeing of the context and bucket,
and therefore no way I could ultimately free the OCM up so other
processes could use it.

I don't think we should be leaking memory at all (even if it is only
4K), but not being able to free a system resource is a major problem.

Chris

Re: Failure to post bug report.

On Tue, 26 Feb 2013 20:00:35 GMT, Dave Higton wrote:

> Three different results with #949 json (JS enabled):
>
> 1) georgesregisjazzband: causes a crash.
>
> 2) document-records: renders OK.
>
> 3) amiga.org: infinite fetching.

With the absolute latest and JS enabled, amiga.org fetches for ages
and then displays a message which simply says "OK" (reloading then
displays it correctly). At one point it crashed, though I didn't save
the crash log.

Chris

Re: Failure to post bug report.

In message <ae6af42353.pnyoung@pnyoung.ormail.co.uk>
Peter Young <pnyoung@ormail.co.uk> wrote:

> BTW I managed to report the bug using an older version of NetSurf; #723 it
> was, which I happened to have lying around.

I've just submitted a bug report using #949 json (JS enabled).

Dave

Re: Failure to post bug report.

In message <OUT-512C07F1.MD-1.4.17.chris.young@unsatisfactorysoftware.co.uk>
"Chris Young" <chris.young@unsatisfactorysoftware.co.uk> wrote:

> On Mon, 25 Feb 2013 23:13:07 +0000 (GMT), Chris Newman wrote:
>
> > > Trying to access http://www.document-records.com/ fails. It gets stuck
> > > on "Fetching, Fetching, Processing"; no timeout after two minutes,
> > > after which I got bored. This is with JavaScript either on or off. It
> > > works on Windows Firefox
> >
> > Using #948 ditto with these...
> >
> > http://www.georgesregisjazzband.com/
> >
> >
> > http://www.document-records.com/
> >
> >
> > Can others verify this, please? What's going on?
>
> Same here, also getting it with http://www.amiga.org

Three different results with #949 json (JS enabled):

1) georgesregisjazzband: causes a crash.

2) document-records: renders OK.

3) amiga.org: infinite fetching.

I've submitted bug report 3606107 including the crash log for (1).

Dave

Re: Failure to follow on-page links

In message <ff7a292453.iyojohn@rickman.argonet.co.uk>
John Rickman Iyonix <rickman@argonet.co.uk> wrote:

>Richard Porter wrote
>
>> On this page and similar ones on the site
>
>> http://www.rdtev.de/aktuelles.html
>
>> the navigation links on the left work fine but the on-page links in
>> the lower right hand pane (which scrolls) do not work. The layout is
>> defined in the css file. There is no javascript. If you click on a
>> local link NS does a bit of processing but leaves the page unchanged.
>> There are name tags with both name and ID specified.
>
>> I suspect that NS can't cope with local links in a scrolling pane with
>> absolute positioning in CSS.
>
>NS does not do absolute positioning.
>

Yet? or Never?

Regards,

--
Graham Pickles
www.whitbymuseum.org.uk Whitby Museum

Re: [Rpcemu] MDF query

In article <fd2faf2353.dougjwebb@doug.j.webb.btinternet.com>,
Doug Webb <doug.j.webb@btinternet.com> wrote:
> In message <5323a6846dbob@mightyoak.org.uk>
> Bob Latham <bob@mightyoak.org.uk> wrote:

> > In article <14e743f552.old_coaster@old_coaster.yahoo.co.uk>,
> > Tony Moore <old_coaster@yahoo.co.uk> wrote:
> >> On 27 Nov 2012, Bob Latham <bob@mightyoak.org.uk> wrote:

> >> [snip]

> >>> where are you getting these MDFs from in the first place?

> >> They're included in the ROOL HardDisc4 download available at
> >> http://www.riscosopen.org/content/downloads/other-zipfiles

> > Thanks Tony for the information but how on earth do you open it? On
> > that site there appears to be two HardDisc4 downloads, neither will
> > open on any machine or software I have. What is suddenly so bad about
> > a zip file that everyone can open?


> > Bob.

> Bob

> You may have your mimemap file set to tag any unknown file type as zip,
> as the HardDisc images are not zip files but self-extracting or
> tarball files.

> So download the HardDisc4 self-extracting file, change its file type
> to &FFC (i.e. select the file > Menu > Set file type &FFC), then
> double-click on it to extract. All this is mentioned by clicking on
> the blue information icon to the right of the files on the ROOL
> website.

Hi Doug,

Thanks for the help.

Tried that, got an error: "no room to run transient"

I assume this is because I don't have a RISC OS 5 machine.


Cheers,

Bob.

--
Bob Latham
Stourbridge, West Midlands

Re: Failure to follow on-page links

On 26 Feb 2013 John Rickman Iyonix wrote:

>> I suspect that NS can't cope with local links in a scrolling pane with
>> absolute positioning in CSS.

> NS does not do absolute positioning.

Yes of course it does. The frames are located by absolute positioning,
not tables. The page layout is OK but NS won't jump to on-page name
anchors within the scrolling text.

--
Richard Porter http://www.minijem.plus.com/
mailto:ricp@minijem.plus.com
I don't want a "user experience" - I just want stuff that works.

Re: Failure to follow on-page links

Richard Porter wrote

> On this page and similar ones on the site

> http://www.rdtev.de/aktuelles.html

> the navigation links on the left work fine but the on-page links in
> the lower right hand pane (which scrolls) do not work. The layout is
> defined in the css file. There is no javascript. If you click on a
> local link NS does a bit of processing but leaves the page unchanged.
> There are name tags with both name and ID specified.

> I suspect that NS can't cope with local links in a scrolling pane with
> absolute positioning in CSS.

NS does not do absolute positioning.

--
John Rickman - http://rickman.orpheusweb.co.uk/lynx
Encouraged by superficial notions of evolution, Which becomes, in the
popular mind, a means of disowning the past.

Failure to follow on-page links

On this page and similar ones on the site

http://www.rdtev.de/aktuelles.html

the navigation links on the left work fine but the on-page links in
the lower right hand pane (which scrolls) do not work. The layout is
defined in the css file. There is no javascript. If you click on a
local link NS does a bit of processing but leaves the page unchanged.
There are name tags with both name and ID specified.

I suspect that NS can't cope with local links in a scrolling pane with
absolute positioning in CSS.

--
Richard Porter http://www.minijem.plus.com/
mailto:ricp@minijem.plus.com
I don't want a "user experience" - I just want stuff that works.

Re: [Rpcemu] Disc not understood

On 24 Feb 2013, Bob Latham <bob@mightyoak.org.uk> wrote:

[snip]

> I have done a bit-for-bit comparison of the files on the Sony laptop
> with those on the Samsung and every one tested identical

[snip]

> I have tried the format application but not actually formatted the
> [ADFS] drive as the two machines give different views of the drive.
>
> Both give the drive as RPCEmuHD, but on the Samsung it asks if I wish
> to retain the shape; on the Sony it appears to know nothing about the
> drive and asks what type of drive it is.

Are you certain that the hd4.hdf file on the Sony machine has not been
corrupted? The hd4.hdf files available at http://www.marutan.net/rpcemu/
are pre-formatted, and re-formatting should be unnecessary.

[snip HForm screengrabs]

From the screengrabs, it seems that HForm must be running on both
machines, presumably in HostFS.

If that is so, I would suggest that, on the Sony machine, you forget
about ADFS - for now - and boot from HostFS. To do that: on the Samsung
machine, copy !Boot from ADFS to HostFS. Then move !Boot from Samsung
Program Files\RPCEmu\hostfs to Sony Program Files\RPCEmu\hostfs (via
network, or flash-drive). This should ensure that filetypes/extensions
are not screwed-up.

On the Sony machine, run RPCEmu and, when it stops with 'disc not found'
error, or whatever, issue the commands

*configure filesystem hostfs
*configure boot

then quit and re-start RPCEmu. If booting is stopped by another error,
try *unplug , then *rmreinit each unplugged module.

When you have a working machine you could then further investigate the
reason for hd4.hdf not being recognised.

Tony




Re: libwapcaplet - memory leak?

On 26/02/2013 14:11, Rob Kendrick wrote:
> On Tue, Feb 26, 2013 at 01:08:33PM +0000, Daniel Silverstone wrote:
>> On Tue, Feb 26, 2013 at 01:10:41PM +0100, François Revol wrote:
>>> At least on BeOS we can in theory include NetSurf as a replicant, where
>>> the binary is dlopen()ed and dlclosed() when removed, so using atexit()
>>> would cause quite some trouble there.
>>
>> If this is *truly* a use-case we need to think about, I'll ponder making the
>> root somehow static so it gets unloaded automatically, although that might
>> complicate rehashing.
>
> The ugly destructor approach could be workable: if for some reason a
> host's compiler doesn't support it, we can wrap it in a suitable #ifdef
> and they can deal with the leak.
>
> Or, perhaps we could have some C++ with a real destructor on a singleton
> that calls the finalisation code? Not sure how this would interact
> with multiple users and dlopen()ing of the whole thing.

Well, C++ dtors are usually called from this mechanism anyway...
it's just more standardized at the C++ level.

François.

Re: Failure to post bug report.

In article <ae6af42353.pnyoung@pnyoung.ormail.co.uk>,
Peter Young <pnyoung@ormail.co.uk> wrote:
> On 25 Feb 2013 Chris Newman <cvjazz@waitrose.com> wrote:

> > In article <ec3a032353.pnyoung@pnyoung.ormail.co.uk>,
> > Peter Young <pnyoung@ormail.co.uk> wrote:
> >> I've been trying to post this to the bug tracker:

> >> [quote]

> >> NetSurf #942 and ARMini, RISC OS 5.19 (16 May 2012).

> >> Trying to access http://www.document-records.com/ fails. It gets stuck
> >> on "Fetching, Fetching, Processing"; no timeout after two minutes,
> >> after which I got bored. This is with JavaScript either on or off. It
> >> works on Windows Firefox

> > Using #948 ditto with these...

> > http://www.georgesregisjazzband.com/

> > http://www.document-records.com/

> > Can others verify this, please? What's going on?

> BTW I managed to report the bug using an older version of NetSurf;
> #723 it was, which I happened to have lying around.

Yes. Ta. Well done that man. I was just adding the regis jazz band site to
prove it wasn't just a one-off. I didn't think you had that one when you
submitted.

--
Chris

Re: libwapcaplet - memory leak?

On Tue, Feb 26, 2013 at 01:08:33PM +0000, Daniel Silverstone wrote:
> On Tue, Feb 26, 2013 at 01:10:41PM +0100, François Revol wrote:
> > At least on BeOS we can in theory include NetSurf as a replicant, where
> > the binary is dlopen()ed and dlclosed() when removed, so using atexit()
> > would cause quite some trouble there.
>
> If this is *truly* a use-case we need to think about, I'll ponder making the
> root somehow static so it gets unloaded automatically, although that might
> complicate rehashing.

The ugly destructor approach could be workable: if for some reason a
host's compiler doesn't support it, we can wrap it in a suitable #ifdef
and they can deal with the leak.

Or, perhaps we could have some C++ with a real destructor on a singleton
that calls the finalisation code? Not sure how this would interact
with multiple users and dlopen()ing of the whole thing.

B.

Re: libwapcaplet - memory leak?

On Tue, Feb 26, 2013 at 01:10:41PM +0100, François Revol wrote:
> At least on BeOS we can in theory include NetSurf as a replicant, where
> the binary is dlopen()ed and dlclosed() when removed, so using atexit()
> would cause quite some trouble there.

If this is *truly* a use-case we need to think about, I'll ponder making the
root somehow static so it gets unloaded automatically, although that might
complicate rehashing.

I'll think about it.

D.

--
Daniel Silverstone http://www.netsurf-browser.org/
PGP mail accepted and encouraged. Key Id: 3CCE BABE 206C 3B69

Re: libwapcaplet - memory leak?

On 26/02/2013 10:17, Daniel Silverstone wrote:
> On Tue, Feb 26, 2013 at 07:05:05AM +0000, John-Mark Bell wrote:
>> On Mon, 2013-02-25 at 20:10 +0000, Chris Young wrote:
>>> I must be missing something, but libwapcaplet allocates a "bucket" in
>>> lwc__initialise(). As far as I can tell, this is never freed - as
>>> there is no lwc__finalise or equivalent function?
>>
>> Correct -- this is expected. There's no safe way to do this without
>> adding a context to the lwc API. That introduces a significant amount of
>> pain in clients (as they have to pass the context around internally,
>> which is problematic when you have multiple things using it). Thus, the
>> root hash bucket is leaked, instead.
>>
>> I guess we could free the bucket when there are no strings left in it,
>> however. Daniel: any thoughts?
>
> It seems a tad silly to add a check to *every* unref to decide if we can free
> the root bucket. Esp. given that with all the libs in use, that root bucket
> will only ever empty fully immediately before exit.

Or rather actually (hopefully) before unloading the library.

At least on BeOS we can in theory include NetSurf as a replicant, where
the binary is dlopen()ed and dlclosed() when removed, so using atexit()
would cause quite some trouble there.

Instead there are almost-standard hooks for use in libraries; though the
names differ between OSes, they are usually available.
On ELF-based systems it's usually _fini(), though the dlopen man page on
Linux mentions it as deprecated in favor of __attribute__((destructor)).

François.

Re: libwapcaplet - memory leak?

On Tue, Feb 26, 2013 at 07:05:05AM +0000, John-Mark Bell wrote:
> On Mon, 2013-02-25 at 20:10 +0000, Chris Young wrote:
> > I must be missing something, but libwapcaplet allocates a "bucket" in
> > lwc__initialise(). As far as I can tell, this is never freed - as
> > there is no lwc__finalise or equivalent function?
>
> Correct -- this is expected. There's no safe way to do this without
> adding a context to the lwc API. That introduces a significant amount of
> pain in clients (as they have to pass the context around internally,
> which is problematic when you have multiple things using it). Thus, the
> root hash bucket is leaked, instead.
>
> I guess we could free the bucket when there are no strings left in it,
> however. Daniel: any thoughts?

It seems a tad silly to add a check to *every* unref to decide if we can free
the root bucket. Esp. given that, with all the libs in use, the root bucket
will only ever fully empty immediately before exit.

Perhaps in debug builds?
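The idea being debated can be sketched as below (hypothetical names, not libwapcaplet's real internals); the branch in unref_string() is the per-unref cost being objected to:

```c
#include <stdlib.h>

/* Hypothetical sketch: free the root bucket when the last interned
 * string is unreffed. */
static void **bucket;
static size_t nstrings;

void intern_string(void)
{
	if (bucket == NULL)
		bucket = calloc(64, sizeof *bucket);
	nstrings++;
}

void unref_string(void)
{
	/* This test runs on every single unref, even though the count
	 * only reaches zero just before the process exits anyway. */
	if (nstrings > 0 && --nstrings == 0) {
		free(bucket);
		bucket = NULL;
	}
}
```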

D.

--
Daniel Silverstone http://www.netsurf-browser.org/
PGP mail accepted and encouraged. Key Id: 3CCE BABE 206C 3B69

Re: libwapcaplet - memory leak?

On Tue, Feb 26, 2013 at 07:05:05AM +0000, John-Mark Bell wrote:
> I guess we could free the bucket when there are no strings left in it,
> however. Daniel: any thoughts?

For simply silencing valgrind, perhaps an atexit handler?
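Something like the following (names hypothetical, not libwapcaplet's API) would do for a normal process exit; it is not safe if the library can be dlclose()d first, since the handler would then point at unmapped code:

```c
#include <stdlib.h>

/* Hypothetical sketch of the atexit() idea: free the root bucket on
 * normal process exit, purely so valgrind stops reporting the leak. */
static void *root_bucket;

static void free_root_bucket(void)
{
	free(root_bucket);
	root_bucket = NULL;
}

void bucket_initialise(void)
{
	if (root_bucket == NULL) {
		root_bucket = calloc(4096, 1);
		/* Registered once; runs after main() returns. */
		atexit(free_root_bucket);
	}
}
```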

B.

Re: Failure to post bug report.

On 25 Feb 2013 Chris Newman <cvjazz@waitrose.com> wrote:

> In article <ec3a032353.pnyoung@pnyoung.ormail.co.uk>,
> Peter Young <pnyoung@ormail.co.uk> wrote:
>> I've been trying to post this to the bug tracker:

>> [quote]

>> NetSurf #942 and ARMini, RISC OS 5.19 (16 May 2012).

>> Trying to access http://www.document-records.com/ fails. It gets stuck
>> on "Fetching, Fetching, Processing"; no timeout after two minutes,
>> after which I got bored. This is with JavaScript either on or off. It
>> works on Windows Firefox

> Using #948 ditto with these...

> http://www.georgesregisjazzband.com/

> http://www.document-records.com/

> Can others verify this, please? What's going on?

BTW I managed to report the bug using an older version of NetSurf;
#723 it was, which I happened to have lying around.

With best wishes,

Peter.

--
Peter Young (zfc Ta) and family
Prestbury, Cheltenham, Glos. GL52, England
http://pnyoung.orpheusweb.co.uk
pnyoung@ormail.co.uk

Monday, 25 February 2013

Re: Parser breaking with ";" in the text field.

On Mon, 2013-02-25 at 23:37 -0500, Anil Jangam wrote:
> Team,
>
>
> I observed that HTML parser (hubbub-0.1.2) is breaking when it finds a
> SEMICOLON in the text field. I am giving below an example of the text
> string.

[...]

> <meta http-equiv="content-type" content="text/html; charset=UTF-8" />

> When it finds the ';', it stops working. When I remove this ';' from
> the string, it works fine. Can you please check, if this is an issue
> with the parser or if I am missing anything?

Can you explain what you mean by "stops working"? The output below is
exactly what I would expect to see, given the input above.

> ELEMENT meta
> ATTRIBUTE http-equiv
> TEXT
> content=content-type
> ATTRIBUTE content
> TEXT
> content=text/html; charset=UTF-8


J.

Re: libwapcaplet - memory leak?

On Mon, 2013-02-25 at 20:10 +0000, Chris Young wrote:
> I must be missing something, but libwapcaplet allocates a "bucket" in
> lwc__initialise(). As far as I can tell, this is never freed - as
> there is no lwc__finalise or equivalent function?

Correct -- this is expected. There's no safe way to do this without
adding a context to the lwc API. That introduces a significant amount of
pain in clients (as they have to pass the context around internally,
which is problematic when you have multiple things using it). Thus, the
root hash bucket is leaked, instead.

I guess we could free the bucket when there are no strings left in it,
however. Daniel: any thoughts?


J.

Parser breaking with ";" in the text field.

Team,

I observed that the HTML parser (hubbub-0.1.2) breaks when it finds a SEMICOLON in the text field. I am giving below an example of the text string.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<head> 
        <title>MIT - Massachusetts Institute of Technology</title> 
        <meta name="keywords" content="Massachusetts Institute of Technology, MIT" /> 
        <meta name="description" content="MIT is devoted to the advancement of knowledge and education of students in areas that contribute to or prosper in an environment of science and technology." /> 
        <meta name="robots" content="index,follow,noodp,noydir" /> 
        <meta name="allow-search" content="yes" /> 
        <meta name="language" content="en" /> 
        <meta name="distribution" content="global" /> 
        <meta http-equiv="content-type" content="text/html; charset=UTF-8" /> 


When it finds the ';', it stops working. When I remove this ';' from the string, it works fine. Can you please check if this is an issue with the parser, or if I am missing anything?

I am pasting below the output of the parser (i.e. ./libxml); mit-edu.htm is the HTML web page I am giving as input.

anilj@ubuntu:~/apache/sandbox/hubbub-0.1.2/examples$ ./libxml mit-edu.htm
WARNING: Failed creating namespace xml
HTML DOCUMENT
standalone=true
  DTD(html), PUBLIC -//W3C//DTD XHTML 1.0 Transitional//EN, SYSTEM http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd
  ELEMENT html
    default namespace href=http://www.w3.org/1999/xhtml
    namespace math href=http://www.w3.org/1998/Math/MathML
    namespace svg href=http://www.w3.org/2000/svg
    namespace xlink href=http://www.w3.org/1999/xlink
    namespace xmlns href=http://www.w3.org/2000/xmlns/
    ATTRIBUTE xmlns
      TEXT
        content=http://www.w3.org/1999/xhtml
    ELEMENT head
      TEXT
        content=   
      ELEMENT title
        TEXT
          content=MIT - Massachusetts Institute of Technol...
      TEXT
        content=   
      ELEMENT meta
        ATTRIBUTE name
          TEXT
            content=keywords
        ATTRIBUTE content
          TEXT
            content=Massachusetts Institute of Technology, M...
      TEXT
        content=   
      ELEMENT meta
        ATTRIBUTE name
          TEXT
            content=description
        ATTRIBUTE content
          TEXT
            content=MIT is devoted to the advancement of kno...
      TEXT
        content=    
      ELEMENT meta
        ATTRIBUTE name
          TEXT
            content=robots
        ATTRIBUTE content
          TEXT
            content=index,follow,noodp,noydir
      TEXT
        content=   
      ELEMENT meta
        ATTRIBUTE name
          TEXT
            content=allow-search
        ATTRIBUTE content
          TEXT
            content=yes
      TEXT
        content=   
      ELEMENT meta
        ATTRIBUTE name
          TEXT
            content=language
        ATTRIBUTE content
          TEXT
            content=en
      TEXT
        content=   
      ELEMENT meta
        ATTRIBUTE name
          TEXT
            content=distribution
        ATTRIBUTE content
          TEXT
            content=global
      TEXT
        content=   
      ELEMENT meta
        ATTRIBUTE http-equiv
          TEXT
            content=content-type
        ATTRIBUTE content
          TEXT
            content=text/html; charset=UTF-8

Re: Failure to post bug report.

On Mon, 25 Feb 2013 23:13:07 +0000 (GMT), Chris Newman wrote:

> > Trying to access http://www.document-records.com/ fails. It gets stuck
> > on "Fetching, Fetching, Processing"; no timeout after two minutes,
> > after which I got bored. This is with JavaScript either on or off. It
> > works on Windows Firefox
>
> Using #948 ditto with these...
>
> http://www.georgesregisjazzband.com/
>
>
> http://www.document-records.com/
>
>
> Can others verify this, please? What's going on?

Same here, also getting it with http://www.amiga.org

Chris

Re: Failure to post bug report.

In article <ec3a032353.pnyoung@pnyoung.ormail.co.uk>,
Peter Young <pnyoung@ormail.co.uk> wrote:
> I've been trying to post this to the bug tracker:

> [quote]

> NetSurf #942 and ARMini, RISC OS 5.19 (16 May 2012).

> Trying to access http://www.document-records.com/ fails. It gets stuck
> on "Fetching, Fetching, Processing"; no timeout after two minutes,
> after which I got bored. This is with JavaScript either on or off. It
> works on Windows Firefox

Using #948 ditto with these...

http://www.georgesregisjazzband.com/


http://www.document-records.com/


Can others verify this, please? What's going on?

--
Chris

Re: New textareas and text inputs

On Fri, 22 Feb 2013 16:30:25 +0000 (GMT), Michael Drake wrote:

> > I think I've tracked this down to a bug (or "implementation feature",
> > perhaps) in OS4's newlib realloc() function.
>
> > If I allocate 10MB of memory in 64 byte chunks using realloc(), around
> > 400MB of RAM gets eaten up, rather than the expected 10MB.
>
> Does it behave any better now? I've increased the step size.

Doesn't seem to have made an awful lot of difference, unfortunately.
I do seem to have finally convinced the PTB that there is a problem
though.

> In other news, the Amiga ami_file_save_req() behaviour needs attention,
> because browser_window_get_selection() now returns a pointer to a char *
> string that the client must free, rather than the old textplain/html
> struct selection object.

I think I've obliterated struct selection* from the frontend now.

Chris

libwapcaplet - memory leak?

I must be missing something, but libwapcaplet allocates a "bucket" in
lwc__initialise(). As far as I can tell, this is never freed - as
there is no lwc__finalise or equivalent function?

The strings destroy themselves, but this doesn't free up the original
"bucket".

Chris

Re: [Rpcemu] MDF query

In message <5323a6846dbob@mightyoak.org.uk>
Bob Latham <bob@mightyoak.org.uk> wrote:

> In article <14e743f552.old_coaster@old_coaster.yahoo.co.uk>,
> Tony Moore <old_coaster@yahoo.co.uk> wrote:
>> On 27 Nov 2012, Bob Latham <bob@mightyoak.org.uk> wrote:

>> [snip]

>>> where are you getting these MDFs from in the first place?

>> They're included in the ROOL HardDisc4 download available at
>> http://www.riscosopen.org/content/downloads/other-zipfiles

> Thanks Tony for the information but how on earth do you open it? On that
> site there appears to be two HardDisc4 downloads, neither will open on any
> machine or software I have. What is suddenly so bad about a zip file that
> everyone can open?


> Bob.

Bob

You may have your MimeMap file set to tag any unknown file type as zip,
as the HardDisc4 images are not zips but self-extracting or tarball
files.

So download the HardDisc4 self-extracting file, change its file type
to &FFC (i.e. select the file > Menu > Set type &FFC), then double-click
on it to extract. All this is mentioned by clicking on the blue
information icon to the right of the files on the ROOL website.

Doug


--
See and experience the future using ARM Technology - BeagleBoard -xM,
Cortex A8 and RISC OS 5.19.

_______________________________________________
Rpcemu mailing list
Rpcemu@riscos.info
http://www.riscos.info/cgi-bin/mailman/listinfo/rpcemu

Re: [Rpcemu] MDF query

In article <14e743f552.old_coaster@old_coaster.yahoo.co.uk>,
Tony Moore <old_coaster@yahoo.co.uk> wrote:
> On 27 Nov 2012, Bob Latham <bob@mightyoak.org.uk> wrote:

> [snip]

> > where are you getting these MDFs from in the first place?

> They're included in the ROOL HardDisc4 download available at
> http://www.riscosopen.org/content/downloads/other-zipfiles

Thanks, Tony, for the information, but how on earth do you open it? On that
site there appear to be two HardDisc4 downloads, neither of which will open
on any machine or software I have. What is suddenly so bad about a zip file
that everyone can open?


Bob.

--
Bob Latham
Stourbridge, West Midlands

_______________________________________________
Rpcemu mailing list
Rpcemu@riscos.info
http://www.riscos.info/cgi-bin/mailman/listinfo/rpcemu

Re: rendering of Wikipedia page

In message <53239574c5chris@chris-johnson.org.uk>
cj <chris@chris-johnson.org.uk> wrote:

> In article <fd87902353.jim@nails.abbeypress.net>,
> Jim Nagel <netsurf@abbeypress.co.uk> wrote:
>> John Rickman Iyonix wrote on 24 Feb:
>> > Are there any pages in particular that are slow?
>> > Wikipedia pages on my Iyonix typically take 2 to 3 seconds to load and
>> > render which is quick enough.
>
>> On Netsurf home page, click Wikipedia link. The Wikipedia home page
>> takes 10 seconds to appear on my Iyonix (Ro 5.18, Netsurf #891).
>
>> Today's home page happens to have a link to "exoplanet". I clicked
>> that; the Wikipedia entry
>> http://en.wikipedia.org/wiki/Extrasolar_planet took 10 seconds to
>> appear.
>
>> Click a typical link on that page.
>> http://en.wikipedia.org/wiki/Extragalactic_planet takes 9 seconds to
>> appear.
>
> Timings seem a bit different here (RISC OS 5.19, 24 Feb 2013),
> NetSurf #946 js disabled.
>
> Test 1: 3.8s rather than 10s
> Test 2: 13s rather than 10s
> Test 3: 2.8s rather than 9s
>
> YMMV
>
I think I may have misunderstood the question in my earlier response.
Anyway, results here under RPCEmu089/402 (system details as before, NS
#932, js on):

Test 1: 3s
Test 2: 8s
Test 3: 2s

The thought occurs that local network speed could have a bearing on
these results as well.

G

--
george greenfield

Re: rendering of Wikipedia page

In message <20130225145543.GD8593@pepperfish.net>
Rob Kendrick <rjek@netsurf-browser.org> wrote:

> On Mon, Feb 25, 2013 at 02:00:01PM +0000, Jim Nagel wrote:
>> John Rickman Iyonix wrote on 24 Feb:
[snip]
>>
>> On Netsurf home page, click Wikipedia link. The Wikipedia home page
>> takes 10 seconds to appear on my Iyonix (Ro 5.18, Netsurf #891).
>
> What were you expecting from a system built using a CPU meant to go on
> SCSI cards from 2002 and an OS with a dire IO layer?
>
> To be honest, I'm amazed it only takes 10 seconds.
>
> B.
>
I tried the same test here (RPCEmu089/402 running on a Win7/64 Dell
XPS [3.4GHz quad-core i7 CPU]) and it averaged 3 secs over 3 or 4
tries, which seems quite acceptable. I suspect the performance on a
Pandaboard ES or ARMini would be similar, or better. The Iyo was a
fine machine in its day but it is now obsolescent, even by RISC OS
standards. On the PC side, using Firefox, the page takes under a second.
Cheers,

George

--
george greenfield

Re: rendering of Wikipedia page

Jim Nagel wrote

> John Rickman Iyonix wrote on 24 Feb:
>> Are there any pages in particular that are slow?
>> Wikipedia pages on my Iyonix typically take 2 to 3 seconds to load and
>> render which is quick enough.

the Wikipedia entry
> http://en.wikipedia.org/wiki/Extrasolar_planet took 10 seconds to
> appear.

It's nothing to do with Wikipedia, just a function of page content size.

A full save of the above page shows it to comprise 492 files with a
total size of 780 kbytes.

http://www.wikipedia.org/ takes 2 seconds to load and comprises 56 files, total size 14 kbytes.

John

--
John Rickman - http://rickman.orpheusweb.co.uk/lynx

Re: rendering of Wikipedia page

On Mon, Feb 25, 2013 at 02:00:01PM +0000, Jim Nagel wrote:
> John Rickman Iyonix wrote on 24 Feb:
> > Are there any pages in particular that are slow?
> > Wikipedia pages on my Iyonix typically take 2 to 3 seconds to load and
> > render which is quick enough.
>
> On Netsurf home page, click Wikipedia link. The Wikipedia home page
> takes 10 seconds to appear on my Iyonix (Ro 5.18, Netsurf #891).

What were you expecting from a system built using a CPU meant to go on
SCSI cards from 2002 and an OS with a dire IO layer?

To be honest, I'm amazed it only takes 10 seconds.

B.

Re: rendering of Wikipedia page

In article <fd87902353.jim@nails.abbeypress.net>,
Jim Nagel <netsurf@abbeypress.co.uk> wrote:
> John Rickman Iyonix wrote on 24 Feb:
> > Are there any pages in particular that are slow?
> > Wikipedia pages on my Iyonix typically take 2 to 3 seconds to load and
> > render which is quick enough.

> On Netsurf home page, click Wikipedia link. The Wikipedia home page
> takes 10 seconds to appear on my Iyonix (Ro 5.18, Netsurf #891).

> Today's home page happens to have a link to "exoplanet". I clicked
> that; the Wikipedia entry
> http://en.wikipedia.org/wiki/Extrasolar_planet took 10 seconds to
> appear.

> Click a typical link on that page.
> http://en.wikipedia.org/wiki/Extragalactic_planet takes 9 seconds to
> appear.

Timings seem a bit different here (RISC OS 5.19, 24 Feb 2013),
NetSurf #946 js disabled.

Test 1: 3.8s rather than 10s
Test 2: 13s rather than 10s
Test 3: 2.8s rather than 9s

YMMV

--
Chris Johnson

Re: rendering of Wikipedia page

John Rickman Iyonix wrote on 24 Feb:
> Are there any pages in particular that are slow?
> Wikipedia pages on my Iyonix typically take 2 to 3 seconds to load and
> render which is quick enough.

On Netsurf home page, click Wikipedia link. The Wikipedia home page
takes 10 seconds to appear on my Iyonix (Ro 5.18, Netsurf #891).

Today's home page happens to have a link to "exoplanet". I clicked
that; the Wikipedia entry
http://en.wikipedia.org/wiki/Extrasolar_planet took 10 seconds to
appear.

Click a typical link on that page.
http://en.wikipedia.org/wiki/Extragalactic_planet takes 9 seconds to
appear.

--
Jim Nagel www.archivemag.co.uk
>> "from" address is genuine but will change. website has current one.

Sunday, 24 February 2013

Re: New clipboard API for front ends, and textarea progress

On 24 Feb, John-Mark Bell wrote in message
<1361747132.31757.59.camel@duiker>:

> However, I believe this is mostly moot, as we fixed paste some time ago:
>
>
http://git.netsurf-browser.org/netsurf.git/commit/?id=64ae9e8693aaaf09cb4e35b63d029d446ef361b0

Ah, OK -- I'd clearly not spotted that. Thanks!

--
Steve Fryatt - Leeds, England

http://www.stevefryatt.org.uk/

Re: New clipboard API for front ends, and textarea progress

On Sun, 2013-02-24 at 22:34 +0000, Steve Fryatt wrote:
> On 10 Jan, Michael Drake wrote in message
> <530c15f056tlsa@netsurf-browser.org>:
>
> > In article <mpro.mgds7b08fs7pc01k4.lists@stevefryatt.org.uk>,
> > Steve Fryatt <lists@stevefryatt.org.uk> wrote:
> >
> > > That shouldn't be too tricky to sort. I'm a little busy at present with
> > > non-computery stuff, but will try to take a look in the next couple of
> > > weeks if no-one else gets there first.
> >
> > Thanks, that would be great. All pastes happen as a result of passing
> > KEY_PASTE to the core, so maybe it can request the clipboard when it
> > passes that to the core, and then when the core calls gui_get_clipboard,
> > maybe it will be easier?
>
> Will gui_poll be called between KEY_PASTE being passed in to the core and
> gui_get_clipboard getting called in return? This would potentially have to
> happen several times -- and the required number could vary.

gui_poll will only ever be called from the main loop. Beyond that, we
make no guarantees.

> If not, is it safe for the front end to go and make calls to its own
> gui_poll before gui_get_clipboard returns? I'm not sure[1] if RISC OS
> defines that a task won't get other poll codes (eg. redraws) during a
> message exchange; if not, then there's the potential for the core to get
> called for something else before gui_get_clipboard returns.

User messages have higher priority than any others, so if there is a
user message waiting, it will take precedence. I don't believe this
permits any assumption about not having to deal with other message types
(or other user messages, for that matter).

However, I believe this is mostly moot, as we fixed paste some time ago:

http://git.netsurf-browser.org/netsurf.git/commit/?id=64ae9e8693aaaf09cb4e35b63d029d446ef361b0


J.

Re: New clipboard API for front ends, and textarea progress

On 10 Jan, Michael Drake wrote in message
<530c15f056tlsa@netsurf-browser.org>:

> In article <mpro.mgds7b08fs7pc01k4.lists@stevefryatt.org.uk>,
> Steve Fryatt <lists@stevefryatt.org.uk> wrote:
>
> > That shouldn't be too tricky to sort. I'm a little busy at present with
> > non-computery stuff, but will try to take a look in the next couple of
> > weeks if no-one else gets there first.
>
> Thanks, that would be great. All pastes happen as a result of passing
> KEY_PASTE to the core, so maybe it can request the clipboard when it
> passes that to the core, and then when the core calls gui_get_clipboard,
> maybe it will be easier?

Will gui_poll be called between KEY_PASTE being passed in to the core and
gui_get_clipboard getting called in return? This would potentially have to
happen several times -- and the required number could vary.

If not, is it safe for the front end to go and make calls to its own
gui_poll before gui_get_clipboard returns? I'm not sure[1] if RISC OS
defines that a task won't get other poll codes (eg. redraws) during a
message exchange; if not, then there's the potential for the core to get
called for something else before gui_get_clipboard returns.


1. Does anyone reading know?

--
Steve Fryatt - Leeds, England Wakefield Acorn & RISC OS Show
Saturday 20 April 2013
http://www.stevefryatt.org.uk/ http://www.wakefieldshow.org.uk/

Re: rendering of Wikipedia page

Jim Nagel wrote

> I've wondered for a long time what it is about Wikipedia's layout
> (or something) that makes its pages so slow to render in Netsurf,
> compared to the time to render the same page in Firefox or Chrome.

> (Netsurf #891 here at the moment, but the wondering goes back a lot
> further; running on RiscPC or Iyonix. Firefox or Chrome on XP.)

> This isn't a criticism, just a curious Q about technicalities
> involved.

Are there any pages in particular that are slow?
Wikipedia pages on my Iyonix typically take 2 to 3 seconds to load and
render which is quick enough.

John

--
John Rickman - http://rickman.orpheusweb.co.uk/lynx
John Rickman - http://mug.riscos.org/

Re: [Rpcemu] Disc not understood

In article <5323115d8fbob@mightyoak.org.uk>,
Bob Latham <bob@mightyoak.org.uk> wrote:
[Snippy]
> Has anyone got RPCEmu working on a 64bit W7 prof laptop?

> Cheers,

> Bob.

Not specifically on a Laptop, but on two Desktop PCs running under 64 bit
W7 Pro I have RPCEmus 0.8.9 running RO 6.20 on 2.55 Gigabytes hd4.hdf

(I do have various versions of it running on a Win 7 Home laptop).

Obviously from my past notes/reasons I've not bothered with Networking,
but when I get around to it, I will on one of the machines again.

It was ages ago that I created my Hd4.hdf for RPCEmu, but as it's
substantially larger than the supplied one, I must have formatted it
using, I think, Select 4.39 and HForm 2.56 - I really can't remember now.

Select RO 6.20 has Hform 2.59 but we've only recently been able to upgrade
the RPCEmus to 6.20.

Dave

FWIW. I have a number of test installs on the two Win 7 Pro machines,
using different RO versions and different Hd4.hdf sizes, the largest being
9+ Gigs.

D.

--

Dave Triffid

_______________________________________________
Rpcemu mailing list
Rpcemu@riscos.info
http://www.riscos.info/cgi-bin/mailman/listinfo/rpcemu

Re: Failure to post bug report.

On 24 Feb 2013 David Pitt <pittdj@pittdj.co.uk> wrote:

> Peter Young, on 24 Feb, wrote:

>> I've been trying to post this to the bug tracker:
>>
>> [quote]
>>
>> NetSurf #942 and ARMini, RISC OS 5.19 (16 May 2012).
>>
>> Trying to access http://www.document-records.com/ fails. It gets stuck on
>> "Fetching, Fetching, Processing"; no timeout after two minutes, after
>> which I got bored. This is with JavaScript either on or off. It works on
>> Windows Firefox

> This problem starts with #911, #910 renders the site promptly.

> (I have failed to find a way to see changes prior to #917 on Jenkins.)

Also, it seems now that you need to use an older version of NetSurf to
post a bug report! I've just posted the report using a copy of #723
that I have lying around.

BTW, who or what is Jenkins?

With best wishes,

Peter.

--
Peter Young (zfc Ta) and family
Prestbury, Cheltenham, Glos. GL52, England
http://pnyoung.orpheusweb.co.uk
pnyoung@ormail.co.uk

Re: rendering of Wikipedia page

On Sun, Feb 24, 2013 at 02:49:20PM +0000, Jim Nagel wrote:
> I've wondered for a long time what it is about Wikipedia's layout
> (or something) that makes its pages so slow to render in Netsurf,
> compared to the time to render the same page in Firefox or Chrome.
>
> (Netsurf #891 here at the moment, but the wondering goes back a lot
> further; running on RiscPC or Iyonix. Firefox or Chrome on XP.)

My initial thought is that they're typically large pieces of
HTML and CSS, and your PC is probably an order of magnitude faster than
your RISC OS machines.

(Plus, Wikipedia, if I recall correctly, uses large images with CSS
clipping. This is a big performance win if you have a disc cache, which
we don't, and Firefox/Chrome do.)

B.

Re: [Rpcemu] Disc not understood

Hi,

I'm still struggling with the same problem a month on and it still baffles
me.

I have done a bit-for-bit comparison of the files on the Sony laptop with
those on the Samsung and every one tested identical. These included:
rpc.cfg, cmos.ram and the rom itself.

I've tried *status and *unplug and they are different, but that must be due
to the Samsung booting and the Sony not booting. The only difference in
the unplug is MimeMap, which is killed on one machine but not the other.

In *status all filing system and drive configurations look the same to me.

I have tried the format application but not actually formatted the drive
as the two machines give different views of the drive.

Both give the drive as RPCEmuHD, but on the Samsung it asks if I wish to
retain the shape; on the Sony it appears to know nothing about the drive
and asks what type of drive it is.

I can find no way to get this text out of the emulation environment to
print it here.

Images of the two format setups are available here...


http://www.mightyoak.org.uk/RPCEmu/samsung.jpg
and
http://www.mightyoak.org.uk/RPCEmu/sony.jpg


Help please, I've made zero progress with this problem in two months!

Has anyone got RPCEmu working on a 64bit W7 prof laptop?



Cheers,

Bob.




In article <531405b9e1bob@mightyoak.org.uk>,
Bob Latham <bob@mightyoak.org.uk> wrote:
> In article <aafb401253.old_coaster@old_coaster.yahoo.co.uk>,
> Tony Moore <old_coaster@yahoo.co.uk> wrote:


> > Try also copying cmos.ram, and rpc.cfg, from Samsung to Sony. When
> > 'disc not understood' occurs what are the responses to *status and to
> > *unplug?

> > Tony


> Noted, thanks. I'll look into this next time I get the chance.


> Cheers,

> Bob.

--
Bob Latham
Stourbridge, West Midlands

_______________________________________________
Rpcemu mailing list
Rpcemu@riscos.info
http://www.riscos.info/cgi-bin/mailman/listinfo/rpcemu

rendering of Wikipedia page

I've wondered for a long time what it is about Wikipedia's layout
(or something) that makes its pages so slow to render in Netsurf,
compared to the time to render the same page in Firefox or Chrome.

(Netsurf #891 here at the moment, but the wondering goes back a lot
further; running on RiscPC or Iyonix. Firefox or Chrome on XP.)

This isn't a criticism, just a curious Q about technicalities
involved.

--
Jim Nagel www.archivemag.co.uk
>> "from" address is genuine but will change. website has current one.

Re: Failure to post bug report.

Peter Young, on 24 Feb, wrote:

> I've been trying to post this to the bug tracker:
>
> [quote]
>
> NetSurf #942 and ARMini, RISC OS 5.19 (16 May 2012).
>
> Trying to access http://www.document-records.com/ fails. It gets stuck on
> "Fetching, Fetching, Processing"; no timeout after two minutes, after
> which I got bored. This is with JavaScript either on or off. It works on
> Windows Firefox

This problem starts with #911, #910 renders the site promptly.

(I have failed to find a way to see changes prior to #917 on Jenkins.)


--
David Pitt

Failure to post bug report.

I've been trying to post this to the bug tracker:

[quote]

NetSurf #942 and ARMini, RISC OS 5.19 (16 May 2012).

Trying to access http://www.document-records.com/ fails. It gets stuck
on "Fetching, Fetching, Processing"; no timeout after two minutes,
after which I got bored. This is with JavaScript either on or off. It
works on Windows Firefox.

[unquote]

with a zipped logfile 10K in size. I get the dreaded "XSRF Attempt
detected" message. Please, what am I doing wrong?

With best wishes,

Peter.

--
Peter Young (zfc Ta) and family
Prestbury, Cheltenham, Glos. GL52, England
http://pnyoung.orpheusweb.co.uk
pnyoung@ormail.co.uk

Friday, 22 February 2013

Re: New clipboard API for front ends, and textarea progress

In article <530af2254btlsa@netsurf-browser.org>,
Michael Drake <tlsa@netsurf-browser.org> wrote:
> In article
> <OUT-50EC6665.MD-1.4.17.chris.young@unsatisfactorysoftware.co.uk>,
> Chris Young <chris.young@unsatisfactorysoftware.co.uk> wrote:

> > If not functions like can_copy(), can_paste(), can_select etc would be
> > very nice to have.

> OK, noted.

We now have browser_window_get_editor_flags(), which provides this info
for browser windows. I've updated the front ends to use it.

There is no such thing for treeview windows as yet. I'm planning a
treeview rewrite anyway.

--

Michael Drake (tlsa) http://www.netsurf-browser.org/

Re: New textareas and text inputs

In article
<OUT-512279AE.MD-1.4.17.chris.young@unsatisfactorysoftware.co.uk>,
Chris Young <chris.young@unsatisfactorysoftware.co.uk> wrote:

> I think I've tracked this down to a bug (or "implementation feature",
> perhaps) in OS4's newlib realloc() function.

> If I allocate 10MB of memory in 64 byte chunks using realloc(), around
> 400MB of RAM gets eaten up, rather than the expected 10MB.

Does it behave any better now? I've increased the step size.
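The allocation pattern under discussion can be reproduced with a small test like this (a sketch, assuming nothing about OS4 newlib's internals; the function name is made up). On a well-behaved realloc() the peak memory use stays close to the requested total rather than ballooning:

```c
#include <stdlib.h>

/* Grow a buffer in fixed-size steps, the pattern reported to eat
 * ~400MB for a 10MB total under OS4 newlib. Returns the final
 * buffer size, or 0 on allocation failure. */
size_t grow_to(size_t target, size_t step)
{
	char *buf = NULL;
	size_t size = 0;

	while (size < target) {
		char *tmp = realloc(buf, size + step);
		if (tmp == NULL) {
			free(buf);
			return 0;
		}
		buf = tmp;
		size += step;
	}
	free(buf);
	return size;
}
```

Growing by a larger (or doubling) step amortises the number of realloc() calls, which is presumably what increasing the step size is aimed at.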

In other news, the Amiga ami_file_save_req() behaviour needs attention,
because browser_window_get_selection() now returns a pointer to a char *
string that the client must free, rather than the old textplain/html
struct selection object.

I've already fixed up the obvious stuff.

--

Michael Drake (tlsa) http://www.netsurf-browser.org/

Re: Ctrl-U and password fields

In article <63b9281f53.harriet@blueyonder.co.uk>,
Harriet Bazley <lists@orange.wingsandbeaks.org.uk> wrote:
> Ctrl-U doesn't seem to work on password fields any more - seems to work
> OK on other text entry fields.

Should be fixed now.

--

Michael Drake (tlsa) http://www.netsurf-browser.org/

Thursday, 21 February 2013

Re: New textareas and text inputs

On 21 Feb 2013 23:39:30 +0000, Chris Young wrote:

> I do
> at least have a buildable version here that doesn't exhibit problems
> after using a few textareas.

Spoke too soon. I don't.

Chris

Re: New textareas and text inputs

On 19 Feb 2013 18:39:40 +0000, Chris Young wrote:

> > > Any suggestions on how to work around this bug would be appreciated,
> > > as I have no idea how long it will take for a fix to be released.
> >
> > UnixLib, the C (and more) runtime library used in the RISC OS NS build,
> > uses Doug Lea's malloc implementation (cfr. http://g.oswego.edu/dl/).
> > Perhaps that's an option for you.
>
> That's what I was trying to link in as a replacement, but some
> libraries still seemed to be using the newlib one, hence freeing
> within NetSurf was crashing. I'm wondering whether that is due to the
> lib in question being dynamically linked. I'll have a play with my
> buildsystem and try again.

Actually, dlmalloc needs some work to actually allocate memory, which is
beyond my understanding, so I wrote something a bit simpler.

However, same problem. It works fine if I replace all the
textarea-internal malloc/realloc/free calls with my functions, but a
global replacement of them isn't working for some reason.

I suspect crazy #ifdeffing of that file won't go down well, but I do
at least have a buildable version here that doesn't exhibit problems
after using a few textareas.

I'm toying with the idea of patching newlib.library, as convincing
either developers or betatesters that it has a problem is proving more
difficult than anticipated. Might be a _very_ bad idea though - I'd
rather work around it in NetSurf as that has less wide-ranging
consequences.

Chris

Re: #910

On 19 Feb 2013 John Williams <JohnRW@ukgateway.net> wrote:


> #910 breaks Google - form submission doesn't work!

Whatever it was that was causing it (and the Google page was looking
rather different when the submission didn't work) it now seems to be
OK in #913.

With best wishes,

Peter.

--
Peter Young (zfc Ta) and family
Prestbury, Cheltenham, Glos. GL52, England
http://pnyoung.orpheusweb.co.uk
pnyoung@ormail.co.uk

Re: 906 js crashes immediately on anything

On Tue, 19 Feb 2013 22:53:09 +0000, John-Mark Bell wrote:
>
> On Tue, 2013-02-19 at 20:02 +0000, Dave Higton wrote:
>> In message <20130219104639.GK27215@somnambulist.local>
>> Daniel Silverstone <dsilvers@netsurf-browser.org> wrote:
>
>>> The best way to report bugs is to report them on the tracker. Dealing
>>> with
>>> duplicates or 'already fixed's on the tracker is easy. Ditto if you
>>> look
>>> on the tracker and see your issue reported already, please do be sure
>>> to
>>> actually go through the report and ensure that you cannot add any extra
>>> data. We need every hint we can get.
>>
>> It was one of those things where "they can't possibly have tested this,
>> it just goes clickbang".
>>
>> It also sounds like what Harriet was reporting on 904.
>
> I find that hard to believe, unless there's something you've omitted to
> tell us about exactly what you were doing to provoke the issue you saw.
> Harriet's issue was very specifically to do with launching items from
> the Hotlist so, if you were doing anything else, then it's almost
> certain that your problem isn't actually fixed.

I was trying to launch items from the hotlist.

Dave

PS John: My apologies for sending you a separate copy; I wasn't paying
enough attention to what the webmail client was doing.


Wednesday, 20 February 2013

Re: #910

On 19 Feb 2013 John Williams <JohnRW@ukgateway.net> wrote:


> #910 breaks Google - form submission doesn't work!

Confirmed here. Whose turn is it to submit a bug report? I will run
out of time today.

With best wishes,

Peter.

--
Peter Young (zfc Ta) and family
Prestbury, Cheltenham, Glos. GL52, England
http://pnyoung.orpheusweb.co.uk
pnyoung@ormail.co.uk

Re: Latest builds crash when opening hotlist/history pages

On 20 Feb 2013 Harriet Bazley <lists@orange.wingsandbeaks.org.uk>
wrote:

> On 19 Feb 2013 as I do recall,
> Peter Young wrote:

>> On 19 Feb 2013 Peter Young <pnyoung@ormail.co.uk> wrote:
>>
>>> On 19 Feb 2013 Harriet Bazley <lists@orange.wingsandbeaks.org.uk>
>>> wrote:
>>
>>>> Builds json-904 and json-904 both crash when I double-click on a hotlist
>>>> or browsing history entry with a stack backtrace along the following
>>>> lines:
>>
>>> Yes, confirmed that this happens on this ARMini; #898 is OK, though.
>>> Which one of us will submit a bug report?
>>
>> I've reported it, with a characteristic typo in the description field.
>>
> Thanks. It's amazing how dependent on hotlists one becomes!

And it's been fixed already! Many thanks to jmb.

With best wishes,

Peter.

--
Peter Young (zfc Ta) and family
Prestbury, Cheltenham, Glos. GL52, England
http://pnyoung.orpheusweb.co.uk
pnyoung@ormail.co.uk

Tuesday, 19 February 2013

Re: Latest builds crash when opening hotlist/history pages

On 19 Feb 2013 as I do recall,
Peter Young wrote:

> On 19 Feb 2013 Peter Young <pnyoung@ormail.co.uk> wrote:
>
> > On 19 Feb 2013 Harriet Bazley <lists@orange.wingsandbeaks.org.uk>
> > wrote:
>
> >> Builds json-904 and json-904 both crash when I double-click on a hotlist
> >> or browsing history entry with a stack backtrace along the following
> >> lines:
>
> > Yes, confirmed that this happens on this ARMini; #898 is OK, though.
> > Which one of us will submit a bug report?
>
> I've reported it, with a characteristic typo in the description field.
>
Thanks. It's amazing how dependent on hotlists one becomes!

--
Harriet Bazley == Loyaulte me lie ==

NOBODY EXPECTS THE SPANISH INQUISITION!

Re: 906 js crashes immediately on anything

On Tue, 2013-02-19 at 20:02 +0000, Dave Higton wrote:
> In message <20130219104639.GK27215@somnambulist.local>
> Daniel Silverstone <dsilvers@netsurf-browser.org> wrote:

> > The best way to report bugs is to report them on the tracker. Dealing with
> > duplicates or 'already fixed's on the tracker is easy. Ditto if you look
> > on the tracker and see your issue reported already, please do be sure to
> > actually go through the report and ensure that you cannot add any extra
> > data. We need every hint we can get.
>
> It was one of those things where "they can't possibly have tested this,
> it just goes clickbang".
>
> It also sounds like what Harriet was reporting on 904.

I find that hard to believe, unless there's something you've omitted to
tell us about exactly what you were doing to provoke the issue you saw.
Harriet's issue was very specifically to do with launching items from
the Hotlist so, if you were doing anything else, then it's almost
certain that your problem isn't actually fixed.


J.

Re: 906 js crashes immediately on anything

In message <20130219104639.GK27215@somnambulist.local>
Daniel Silverstone <dsilvers@netsurf-browser.org> wrote:

> On Mon, Feb 18, 2013 at 09:53:15PM +0000, Dave Higton wrote:
> > I'm hoping this is a well known thing...
>
> Even if it is, more data helps fix bugs. You can work out if it's well
> known by looking at the bug tracker.
>
> > I'm assuming that it's so easy to reproduce that you won't need a bug
> > report: if you do, though, I'll be happy to submit one.
>
> To assume is to make an 'ass' out of 'u' and 'me'. Please provide a
> detailed report explaining what you're doing, what you're fetching, what
> happens, what version of NS you're running on what version of what OS, etc.
> It's critical that you give us as much data as possible so we can reproduce
> the issue as you see it. Otherwise we may find and fix a bug but not the
> one you're seeing.
>
> The best way to report bugs is to report them on the tracker. Dealing with
> duplicates or 'already fixed's on the tracker is easy. Ditto if you look
> on the tracker and see your issue reported already, please do be sure to
> actually go through the report and ensure that you cannot add any extra
> data. We need every hint we can get.

It was one of those things where "they can't possibly have tested this,
it just goes clickbang".

It also sounds like what Harriet was reporting on 904.

Anyway, whatever it was, it's fixed in 908.

Dave
